  • From the column 信数据得永生

    TensorFlow HOWTO 5.1 Recurrent Neural Networks (Time Series)

    r_sqr_ = sess.run(r_sqr, feed_dict={x: x_test, y: y_test})
    r_sqrs.append(r_sqr_)
    # print the loss and the metric every hundred steps
    if e % 100 == 0:
        print(f'epoch: {e}, loss: {loss_}, r_sqr: {r_sqr_}')
    Get the model's predictions for the training and test features.
    …, loss: 0.021203888157209534, r_sqr: 0.5588744061563752
    epoch: 800, loss: 0.020631124908383643, r_sqr: …
    …, r_sqr: 0.34215312925023267
    epoch: 1400, loss: 0.01680496271967236, r_sqr: 0.32958240891225643
    …, r_sqr: 0.31973211945726576
    epoch: 1700, loss: 0.014981184565736732, r_sqr: 0.34878737027421214
    epoch: …

    Published on 2019-02-15
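The snippet above evaluates an R² score (coefficient of determination) every hundred epochs. As a minimal sketch, assuming the standard definition 1 − SS_res / SS_tot and independent of the article's TensorFlow graph (the function name `r_sqr` mirrors the snippet but is otherwise hypothetical):

```python
import numpy as np

def r_sqr(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot
```

A perfect fit gives 1.0; predicting the mean gives 0.0, which is why the logs above hovering between 0.3 and 0.6 indicate a partial fit.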
  • From the column OI

    Yiwu High School Summer Training Camp 2021.07.12 C

    read(sy),read(tx),read(ty);
    #define LS(x) ((x)*((x)+1)%P*(2*(x)+1)%P*Inv6%P)
    #define Sqr … (LS(ty)-LS(sy)+Sqr(sy)-Sqr(ty)+Sqr(sx)*(ty-sy)%P):(LS(sy)-LS(ty)+Sqr(sx)*(sy-ty)))
    #define CY (LS(tx)-LS(sx)+Sqr(sx)-Sqr(tx)+Sqr(sy)*(tx-sx)%P):(LS(sx)-LS(tx)+Sqr(sy)*(sx-tx)))
    #define CZ (2*(LS(tx)-LS(sx))-Sqr(tx)+Sqr(sx)+2*(LS(tx)-LS(sx)-Sqr(tx)+Sqr(sx))%P):(2*(LS(sx)-LS(tx))-Sqr(sx)+Sqr(tx)+2*(LS(sx)-LS(tx))%P))
    Ans=(Sqr(tx)*Sqr(ty)%P-Sqr(sx)*Sqr(sy))%P;
    if(sx==tx){Ans+…

    Edited on 2022-09-19
  • From the column 信数据得永生

    TensorFlow HOWTO 4.2 Multilayer Perceptron Regression (Time Series)

    r_sqr_ = sess.run(r_sqr, feed_dict={x: x_test, y: y_test})
    r_sqrs.append(r_sqr_)
    # print the loss and the metric every hundred steps
    if e % 100 == 0:
        print(f'epoch: {e}, loss: {loss_}, r_sqr: {r_sqr_}')
    Get the model's predictions for the training and test features.
    …, r_sqr: -0.19979831972491247
    epoch: 600, loss: 1386.2307618333675, r_sqr: -0.18952804825771152
    …, r_sqr: -0.09806183013209635
    epoch: 1400, loss: 1272.368278888685, r_sqr: -0.08256632457099822
    …, r_sqr: -0.05016766677562812
    epoch: 1700, loss: 1220.1688168734415, r_sqr: -0.033328289031115066
    epoch: …

    Published on 2019-02-15
  • From the column 信数据得永生

    TensorFlow HOWTO 2.2 Support Vector Regression (Soft Margin)

    r_sqr_ = sess.run(r_sqr, feed_dict={x: x_test, y: y_test})
    r_sqrs.append(r_sqr_)
    # print the loss and the metric every hundred steps
    if e % 100 == 0:
        print(f'epoch: {e}, loss: {loss_}, r_sqr: {r_sqr_}')
    Get the fitted line: x_min = …
    …, r_sqr: 0.7023977527848786
    epoch: 600, loss: 0.09420462812797847, r_sqr: 0.7033420189633286
    epoch: 700, loss: 0.09420331500841268, r_sqr: 0.7040990336920706
    epoch: 800, loss: 0.09420013554417629, r_sqr: …
    …, r_sqr: 0.7085666625849087
    epoch: 1400, loss: 0.09419430203474248, r_sqr: 0.7086043351158677
    epoch: …

    Published on 2019-02-15
  • From the column 信数据得永生

    TensorFlow HOWTO 1.1 Linear Regression

    r_sqr_ = sess.run(r_sqr, feed_dict={x: x_test, y: y_test})
    r_sqrs.append(r_sqr_)
    # print the loss and the metric every hundred steps
    if e % 100 == 0:
        print(f'epoch: {e}, loss: {loss_}, r_sqr: {r_sqr_}')
    Get the fitted line: x_min = …
    …, r_sqr: 0.6779371113275436
    epoch: 600, loss: 0.20484664052302196, r_sqr: 0.6884008829992205
    epoch: 700, loss: 0.20163908809697076, r_sqr: 0.6955228132490906
    epoch: 800, loss: 0.19975160600281744, r_sqr: …
    …, r_sqr: 0.7068004742345046
    epoch: 1400, loss: 0.19777710515759306, r_sqr: 0.7070149540532822
    epoch: …

    Published on 2019-02-15
  • From the column 信数据得永生

    TensorFlow HOWTO 1.2 LASSO, Ridge, and Elastic Net

    r_sqr_ = sess.run(r_sqr, feed_dict={x: x_test, y: y_test})
    r_sqrs.append(r_sqr_)
    # print the loss and the metric every hundred steps
    if e % 100 == 0:
        print(f'epoch: {e}, loss: {loss_}, r_sqr: {r_sqr_}')
    Output:
    epoch: 0, loss: …, r_sqr: -0.4299323503419834
    epoch: 400, loss: 73.34245865955972, r_sqr: 0.13473129501015224
    epoch: 500, …, r_sqr: 0.6787325098436232
    epoch: 900, loss: 24.28818622078879, r_sqr: 0.6872955402664112
    epoch: 1000, …, r_sqr: 0.6900280037335323
    epoch: 1900, loss: 23.908710842591514, r_sqr: 0.6900276378081478
    Plot the loss on the training set

    Published on 2019-02-15
  • From the column 摸鱼范式

    【UVM COOKBOOK】Sequences||Virtual Sequencers

    = virtual_sequencer::type_id::create("m_v_sqr", this);
    endfunction: build_phase
    // Connect - where … failed"); end
    bus = v_sqr.bus; gpio = v_sqr.gpio;
    endtask: body
    endclass: virtual_sequence_base
    … the GPIO env
    class gpio_env_virtual_sqr extends uvm_sequencer #(uvm_sequence_item); // ..
    … uart_v_sqr; gpio_env_virtual_sqr gpio_v_sqr;
    // Low level sequencer pointer assignment:
    // This has …
    uart_bus = uart_v_sqr.bus; gpio = gpio_v_sqr.gpio; gpio_bus = gpio_v_sqr.bus;
    endfunction:

    Published on 2021-11-26
  • From the column 饶文津的专栏

    【HDU 5858】Hard problem (partial circle area)

    … half of (pink + green)
    #include <cstdio>
    #include <cmath>
    #define dd double
    #define sf(a) scanf("%d",&a)
    #define sqr …
    int t; sf(t);
    while(t--){
        int l; sf(l);
        dd h=l/sqrt(2),b=l/2.0,l2=sqr(l);
        dd y=h/4.0,x=y*sqrt(7);
        dd b2=sqr(b),a2=b2,c2=sqr(x-b)+sqr(y);
        dd jd=acos((a2+b2-c2)/sqrt(a2)/b/2.0);
        dd s1=jd*b2;
        dd jd2=acos((l2+sqr(h)-a2)/l/h/2.0);

    Published on 2020-06-02
  • From the column 数据结构与算法

    Luogu P1742 Minimum Covering Circle (Computational Geometry)

    … std; const int MAXN = 1e5 + 10;
    int N; double R;
    struct Point { double x, y; } p[MAXN], C;
    double sqr(double x) { return x * x; }
    double dis(Point a, Point b) { return sqrt(sqr(a.x - b.x) + sqr(…
    … b = p2.y - p1.y, c = p3.x - p1.x, d = p3.y - p1.y,
    e = (sqr(p2.x) - sqr(p1.x) + sqr(p2.y) - sqr(p1.y)) / 2,
    f = (sqr(p3.x) - sqr(p1.x) + sqr(p3.y) - sqr(p1.y)) / 2;
    C.x = (e * d - b * f) / (a * d - b * c);
    C.y = (a * f - e * c) / (a * d - b * …

    Published on 2019-03-04
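The e, f, C.x, C.y formulas in the snippet compute the circumcenter of three points, i.e. the center equidistant from all three. A standalone Python sketch of the same algebra (the function name is hypothetical; the math matches the snippet term for term):

```python
def circumcenter(p1, p2, p3):
    """Center equidistant from three non-collinear points,
    via the same linear system as the snippet's e/f/C.x/C.y."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a, b = x2 - x1, y2 - y1
    c, d = x3 - x1, y3 - y1
    e = (x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2) / 2.0
    f = (x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2) / 2.0
    det = a * d - b * c   # zero iff the points are collinear
    return ((e * d - b * f) / det, (a * f - e * c) / det)
```

For (0,0), (2,0), (0,2) this returns (1, 1), the center of the circle through all three corners.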
  • From the column AI异构

    Caffe Explained: Optimization Algorithms

    lr * g / nd.sqrt(sqr + eps_stable)
    param[:] -= div
    Introduction to RMSProp
    … in zip(params, sqrs):
        g = param.grad / batch_size
        sqr[:] = gamma * sqr + (1. - gamma …
    …, delta in zip(params, sqrs, deltas):
        g = param.grad / batch_size
        sqr[:] = rho * sqr …
    sqr[:] = beta2 * sqr + (1. - beta2) * nd.square(g)
    v_bias_corr = v / (1. - beta1 ** t)
    sqr_bias_corr = sqr / (1. - beta2 ** t)
    div = lr * v_bias_corr / (nd.sqrt(sqr_bias_corr) + eps_stable)

    Published on 2020-07-29
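The fragments above are the RMSProp and Adam update rules (the `nd` calls are MXNet's ndarray API). As a rough illustration of the RMSProp piece alone, a sketch in plain NumPy with hypothetical names, swapped in for `nd`:

```python
import numpy as np

def rmsprop_step(param, grad, sqr, lr=0.01, gamma=0.9, eps=1e-8):
    """One RMSProp update: divide by the root of an EMA of squared grads."""
    sqr[:] = gamma * sqr + (1.0 - gamma) * np.square(grad)
    param[:] -= lr * grad / (np.sqrt(sqr) + eps)
    return param, sqr

# Minimizing f(x) = x^2, whose gradient is 2x:
p = np.array([2.0])
s = np.zeros(1)
for _ in range(100):
    p, s = rmsprop_step(p, 2.0 * p, s)
```

Because the per-coordinate step is normalized by the running gradient magnitude, the effective step size stays near `lr` regardless of the gradient's scale.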
  • From the column 电子电路开发学习

    How to Solve a Junior-High Math Problem with an FPGA

    = sqrt(num);
    for(a = 1; a <= sqr; a++) // can be set to 1-46
    { for(b = 1; b <= sqr; b++) …
    … rst_n) tmp_b <= 0;
    else if(tmp_b == SQR) tmp_b <= 0;
    else if(tmp_a != SQR) tmp_b <= tmp_b + 1;
    end
    always @ (posedge clk) begin
    if(…= SQR) & flag) tmp_a <= tmp_a + 1;
    end
    always @ (posedge clk) begin
    if(… SYSCLK;
    /*instance module*/
    fpga_math #( .SUM(SUM), .SQR(SQR) ) fpga_math_0( //inputs

    Published on 2021-01-03
  • From the column 不二鱼的芯片验证记录

    UVM in My Eyes | 08. virtual_sequence and virtual_sequencer

    fish_rst_sqr; fish_data_sequencer fish_data_sqr; ...
    endclass
    The question is: how does a seq get sent to its corresponding sqr? … For example:
    `uvm_do_on(fish_clk_seq, p_sequencer.fish_clk_sqr);
    `uvm_do_on(fish_rst_seq, p_sequencer.fish_rst_sqr);
    `uvm_do_on(fish_data_seq, p_sequencer.fish_data_sqr);
    These are all conventional usages, strictly by the book. For example, fish_clk_seq above is sent to fish_clk_sqr. … The answer is no. This is because some configuration-type seqs, or certain special seqs, need no concrete sqr to receive them and are never sent to the DUT; they exist only to perform configuration or to generate certain files.

    Edited on 2022-10-28
  • From the column 饶文津的专栏

    【UVALive 4642】Malfatti Circles (circles, binary search)

    Binary-search one radius; the other two radii can then be derived, which takes some formula work (it has been too long, I forget the details).
    #include<cstdio>
    #include<cmath>
    #define eps (1e-8)
    #define sqr(a …
    make(dd r,dd a,dd h,dd n){
        dd t=r-r/tan(h)/tan(n)+a/tan(n);
        if(t<=eps)return -1;
        return sqr …
    r0; } }
    int main(){
        while(read()) {
            for(int i=0;i<3;i++)
                q[i].a=sqrt(sqr(q[(i+1)%3].x-q[i].x)+sqr(q[(i+1)%3].y-q[i].y)); // compute side lengths
            for(int i=0;i<3;i++)
                q[i].v=acos((sqr(q[i].a)+sqr(q[(i+2)%3].a)-sqr(q[(i+1)%3].a))/2/q[i].a/q[(i+2)%3].a)/2;
            solve();

    Published on 2020-06-02
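The entry's approach binary-searches one radius until the circle configuration closes up. The underlying tool is ordinary bisection on a monotone (or sign-changing) function; a generic sketch, assuming f(lo) and f(hi) have opposite signs:

```python
def bisect_root(f, lo, hi, eps=1e-8):
    """Find a root of f on [lo, hi], assuming f changes sign on it."""
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0.0:
            hi = mid        # the sign change lies in the left half
        else:
            lo = mid        # otherwise it lies in the right half
    return (lo + hi) / 2.0
```

Each iteration halves the interval, so an `eps` of 1e-8 over a unit-scale interval needs only about 30 iterations.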
  • From the column 全栈程序员必看

    SGU 319 Kalevich Strikes Back (segment tree sweep line)

    node &cmp)const { return h<cmp.h; } } scline[maxn<<1];
    struct foo { int s,e,h; } sqr …
    sqr[index].h*(LL)(sqr[index].e-sqr[index].s); }
    void dfs(int x) { for(int s=head[x];s!…
    sqr[0].s=0; sqr[0].e=W; sqr[0].h=H;
    for(int i=1;i<=n;i++) {
        int x1,y1,x2,y2;
        …(&x1,&y1,&x2,&y2);
        if(x1>x2)swap(x1,x2);
        if(y1>y2)swap(y1,y2);
        sqr[i].s=x1; sqr[i].e=x2; sqr[i].h=y2-y1;
        scline[2*i-1].s=x1; scline[2*i-1].e=x2;

    Edited on 2022-07-12
  • From the column 优雅R

    Two Days to Learn Python Basics (3): Functions

    ".format(sqr_num))
    Here, sqr_num is declared inside the square_of_num function and cannot be used outside that block.
    $ . …
    /usr/bin/python3
    def square_of_num(num):
        global sqr_num
        sqr_num = num * num
    square_of_num(5)
    print("5 * 5 = {}".format(sqr_num))
    Now we can use sqr_num even outside the function.
    $ . … sqr_num is still {}!".format(sqr_num))
    Note that using global sqr_num affects the sqr_num outside the function.
    $ . … sqr_num is still 4!

    Published on 2020-07-02
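The excerpt's point is Python's `global` statement: without it, `sqr_num` inside `square_of_num` is a local name and is invisible outside; with it, the assignment rebinds the module-level name. A self-contained version of the example:

```python
def square_of_num(num):
    global sqr_num      # rebind sqr_num at module scope, not locally
    sqr_num = num * num

square_of_num(5)
print("5 * 5 = {}".format(sqr_num))   # sqr_num is visible here
```

Without the `global` line, the final `print` would raise `NameError`, which is exactly the first failure the excerpt describes.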
  • From the column 算法之名

    TensorFlow Image Operations (3)

    emb_start_idx:emb_start_idx+nrof_images] = np.NaN
    # all_neg = np.where(np.logical_and(neg_dists_sqr-pos_dist_sqr<alpha, pos_dist_sqr<neg_dists_sqr))[0]  # FaceNet selection
    # Filter the positive/negative pairs: keep those where the negative-pair
    # distance minus the positive-pair distance is less than …
    # We want the samples that do NOT satisfy the triplet-loss constraint,
    # hence the <; this condition is exactly the reverse of the loss computation
    all_neg = np.where(neg_dists_sqr-pos_dist_sqr …

    Edited on 2021-12-13
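The selection rule described above keeps negatives whose squared distance to the anchor, minus the positive squared distance, is below alpha, i.e. triplets that still violate the margin and therefore still produce loss. A small NumPy sketch (the function name is hypothetical; the condition matches the snippet):

```python
import numpy as np

def select_violating_negatives(neg_dists_sqr, pos_dist_sqr, alpha):
    """Indices of negatives that violate the triplet margin:
    d(a, n)^2 - d(a, p)^2 < alpha."""
    return np.where(neg_dists_sqr - pos_dist_sqr < alpha)[0]
```

Training only on such "hard" triplets is what gives the method its signal; triplets already satisfying the margin contribute zero loss and are skipped.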
  • From the column 饶文津的专栏

    【HDU 5733】tetrahedron

    const point &b)const { return (point){y*b.z-b.y*z, b.x*z-x*b.z, x*b.y-b.x*y}; } } p[5];
    dd sqr …
    (const point &o, const point &s, const point &e, point &n){
        point a=s-o, b=e-o;
        n=a^b;
        return sqrt(sqr(n.x)+sqr(n.y)+sqr(n.z))/2;
    }
    int main() {
        while(~p[1].input()){
            for(int i=2;i<=4;i++)

    Published on 2020-06-02
  • From the column 贾志刚-OpenCV学堂

    In Just Half an Hour | A Hand-Written Template-Matching Algorithm in OpenCV

    , target_sums, target_sqr_sums):
        self.ref_imgs = ref_imgs
        self.target_imgs = target_imgs
        self.scores = scores
        self.tpls_sums = tpl_sums
        self.tpls_sqsums = tpl_sqr_sums
        self.target_sums = target_sums
        self.target_sqsums = target_sqr_sums
        self.nms_boxes = …
    … = s2 - s1 * s1 * sr
    if ss_sqr < 0:  # fix issue: floating-point precision
        ss_sqr = 0.0
    sum2 = sum_t * np.sqrt(ss_sqr)
    sum3 = np.sum(np.multiply(tpl_gray, target_gray …

    Edited on 2024-06-12
  • From the column python3

    [LeetCode Easy] Problem 86

    end = num
    while start <= end:
        mid = start + (end - start) // 2
        sqr = mid ** 2
        if sqr < num:
            start = mid + 1
        elif sqr > num:

    Published on 2020-01-19
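The fragment is the standard binary search for an integer square root, cut off mid-branch. A completed sketch, assuming the problem is the usual "valid perfect square" check that the truncated code matches:

```python
def is_perfect_square(num):
    """Binary-search for an integer mid with mid * mid == num."""
    start, end = 1, num
    while start <= end:
        mid = start + (end - start) // 2   # overflow-safe midpoint idiom
        sqr = mid ** 2
        if sqr < num:
            start = mid + 1
        elif sqr > num:
            end = mid - 1
        else:
            return True
    return False
```

The `start + (end - start) // 2` midpoint is a habit carried over from fixed-width languages; Python integers cannot overflow, but the idiom is harmless.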
  • From the column 数据结构与算法

    Luogu P2503 [HAOI2006] Evenly Partitioning Data (Simulated Annealing)

    …cstdio>
    #include<cmath>
    #include<ctime>
    #include<cstdlib>
    #include<algorithm>
    #include<cstring>
    #define sqr …
    ; i++) belong[i] = rand() % M + 1, sum[ belong[i] ] += a[i];
    for(int i = 1; i <= M; i++) ans += sqr …
    sum; // find the position with the minimum sum
    int X = rand() % N + 1; // a uniform random pick works here
    double Pre = ans;
    ans -= sqr(sum[ belong[X] ] - Aver) + sqr(sum[P] - Aver);
    sum[ belong[X] ] -= a[X]; sum[P] += a[X];
    ans += sqr(sum[ belong[X] ] - Aver) + sqr(sum[P] - Aver);
    if((ans < Pre) || (exp( (ans-Pre …

    Published on 2018-05-30
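The annealing move above updates the cost Σ(sum[g] − Aver)² incrementally: subtract the terms of the two affected groups, move the element, then add the terms back, so each proposal costs O(1) instead of O(M). The delta can be sketched as (names are hypothetical):

```python
def move_delta(sums, avg, g_from, g_to, value):
    """Change in sum((sums[g] - avg)^2) if `value` moves g_from -> g_to."""
    before = (sums[g_from] - avg) ** 2 + (sums[g_to] - avg) ** 2
    after = (sums[g_from] - value - avg) ** 2 + (sums[g_to] + value - avg) ** 2
    return after - before
```

A negative delta is always accepted; a positive one is accepted with the Boltzmann probability exp(−delta/T), which is the `exp((ans-Pre)...)` test the snippet is about to make.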