  • From the column 数据结构与算法

    Yet Another Minimization Problem (decision monotonicity, divide-and-conquer DP)

    Given a sequence of length \(n\), split it into \(m\) segments, where the cost of a segment is the number of pairs of equal values within it; minimize the total cost.

    Published 2018-10-11
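The divide-and-conquer DP trick named in the title can be sketched generically: when the optimal split point is monotone in the right endpoint, each DP layer is filled by recursing on the midpoint, giving \(O(m n \log n)\) instead of \(O(m n^2)\). A minimal sketch, using a convex segment cost (squared segment sum, which satisfies the quadrangle inequality) as a stand-in for the article's pairs-of-equal-numbers cost:

```python
def min_partition_cost(a, m):
    # dp[k][i] = min cost to split a[0:i] into k non-empty segments;
    # each layer is computed by divide and conquer over the split point
    n = len(a)
    pref = [0] * (n + 1)
    for i, x in enumerate(a):
        pref[i + 1] = pref[i] + x

    def cost(j, i):
        # placeholder cost of segment a[j:i]: squared segment sum,
        # chosen so that decision monotonicity provably holds
        return (pref[i] - pref[j]) ** 2

    INF = float("inf")
    prev = [0] + [INF] * n  # zero segments: only the empty prefix is free
    for _ in range(m):
        cur = [INF] * (n + 1)

        def rec(lo, hi, optlo, opthi):
            # fill cur[lo..hi]; the optimal split for mid lies in [optlo, opthi]
            if lo > hi:
                return
            mid = (lo + hi) // 2
            best, arg = INF, optlo
            for j in range(optlo, min(mid - 1, opthi) + 1):
                v = prev[j] + cost(j, mid)
                if v < best:
                    best, arg = v, j
            cur[mid] = best
            rec(lo, mid - 1, optlo, arg)
            rec(mid + 1, hi, arg, opthi)

        rec(1, n, 0, n - 1)
        prev = cur
    return prev[n]
```

For the problem's actual cost (pairs of equal numbers), `cost` would instead be evaluated with two moving pointers and a frequency table, but the recursion skeleton is unchanged.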
  • From the column CreateAMind

    Hardware free energy: the emergence of associative learning in neuromorphic inference networks

    … model of the world that predicts incoming sensory data while continuously updating its parameters via … minimization … strength—that mirrored neurophysiological observations—emerged via local (neurocentric) prediction error minimization … connections or groups of neurons … reduced associative learning task and changed the time course of free energy minimization … In this setting, it would be interesting to investigate how the free energy minimization time-course …

    Edited 2022-11-22
  • From the column CVer

    [Computer Vision Paper Digest] 2018-07-07 CVPR image segmentation special, part 1

    We show that CCB-Cut minimization can be relaxed into an orthogonally constrained ℓτ-minimization problem … Using images from the BSDS500 database, we show that image segmentation based on CCB-Cut minimization …

    Published 2018-07-24
  • From the column 我爱计算机视觉

    CVPR 2023 | Tsinghua proposes GAM: a first-order flatness optimizer for neural networks that markedly improves generalization

    In recent years, the flatness of the point a neural network converges to has been shown to be directly linked to its generalization ability, yet existing definitions of flatness are limited to the zeroth-order flatness of sharpness-aware minimization (SAM) and its variants. … The CVPR 2023 Highlight paper from Prof. Cui Peng's group at Tsinghua University, "Gradient norm aware minimization seeks first-order flatness and improves …" … zeroth-order and first-order flatness of the convergence point of the model parameters … SAM [3] proved theoretically that flat minima have lower generalization error on test data than sharp minima, and further proposed optimizing zeroth-order flatness. … "Gradient norm aware minimization seeks first-order flatness and improves generalization." "Sharpness-aware minimization for efficiently improving generalization." In ICLR 2021, spotlight.

    Edited 2023-08-31
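The SAM update referenced throughout the snippet above is a two-step rule: ascend a distance ρ along the normalized gradient to the locally worst nearby point, then descend using the gradient taken there. A minimal sketch on a toy quadratic loss; `rho`, `lr`, and the loss are illustrative choices, not the paper's setup:

```python
import math

def sam_step(w, grad_fn, rho=0.05, lr=0.1):
    # 1) ascend: perturb weights rho along the normalized gradient,
    #    approximating the worst-case point in a rho-ball around w
    g = grad_fn(w)
    norm = math.sqrt(sum(x * x for x in g)) + 1e-12
    w_adv = [wi + rho * gi / norm for wi, gi in zip(w, g)]
    # 2) descend: apply the gradient evaluated at the perturbed point
    g_sharp = grad_fn(w_adv)
    return [wi - lr * gi for wi, gi in zip(w, g_sharp)]

# toy loss L(w) = 0.5 * ||w||^2, whose gradient is w itself
w = [1.0, -2.0]
for _ in range(100):
    w = sam_step(w, lambda v: v)
```

GAM, per the summary above, additionally penalizes the gradient norm around the minimum (first-order flatness) rather than only the worst-case loss value (zeroth-order flatness).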
  • From the column AI科技评论

    CVPR 2023 Highlight | GAM: a generalizable first-order flatness optimizer

    In recent years, the flatness of the point a neural network converges to has been shown to be directly linked to its generalization ability, yet existing definitions of flatness are limited to the zeroth-order flatness of sharpness-aware minimization (SAM) and its variants. … The CVPR 2023 Highlight paper from Prof. Cui Peng's group at Tsinghua University, "Gradient norm aware minimization seeks first-order flatness and improves …" … 2. Zeroth-order and first-order flatness of the convergence point of the model parameters … SAM [3] proved theoretically that flat minima have lower generalization error on test data than sharp minima, and further proposed optimizing zeroth-order flatness. … "Gradient norm aware minimization seeks first-order flatness and improves generalization." "Sharpness-aware minimization for efficiently improving generalization." In ICLR 2021, spotlight.

    Edited 2023-08-08
  • From the column 机器学习炼丹术

    self-training | domain transfer | source-free (part 2)

    After obtaining the pseudo labels, the segmentation model is updated with a cross-entropy loss. In addition, the pixels that were not selected are updated via entropy minimization. Entropy minimization has been shown to be effective in semi-supervised segmentation and domain adaptation, and can be regarded as a soft-assignment version of the cross-entropy loss.

    Published 2021-11-18
  • From the column 小七的各种胡思乱想

    Few-shot tricks, part 3: semi-supervised minimum-entropy regularization, MinEnt & PseudoLabel implementations

    To push the model away from high-density regions, one can raise its prediction confidence on unlabeled samples, i.e. lower the prediction entropy. Two implementations of this minimum-entropy regularization (Entropy Minimization) are given: MinEnt and PseudoLabel. Paper: "Semi-supervised Learning by Entropy Minimization"; this 2005 entropy-minimization work is cited in many later semi-supervised papers.

    Edited 2022-09-08
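The entropy-minimization regularizer described in the two items above is just the mean Shannon entropy of the model's predicted distributions on unlabeled samples; minimizing it pushes predictions toward one-hot. A pure-Python sketch (function names are illustrative, not from either article's code):

```python
import math

def softmax(logits):
    # numerically stable softmax over one prediction vector
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy_loss(batch_logits):
    # mean Shannon entropy of the predicted distributions; minimizing it
    # sharpens predictions on unlabeled pixels/samples (MinEnt)
    total = 0.0
    for logits in batch_logits:
        p = softmax(logits)
        total += -sum(pi * math.log(pi + 1e-12) for pi in p)
    return total / len(batch_logits)
```

A uniform prediction over k classes has the maximal entropy log k, a confident prediction has entropy near zero; the PseudoLabel alternative replaces this soft penalty with hard cross-entropy against the argmax label.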
  • From the column 量子化学

    Molecular docking with Discovery Studio: LibDock

    … Filter Ligands (Prepare Ligands); (3) Search small molecule conformations, producing 9 conformations for B25(1); (4) Minimization → Full Minimization, with Input Ligands set to B25-(1):All; → Run, which finishes with "9 poses optimized".

    Published 2020-07-27
  • From the column 流川疯编写程序的艺术

    GraphCuts explained: solving max-flow / min-cut with the GraphCuts algorithm, with an example

    maxflow algorithm described in "An Experimental Comparison of Min-Cut/Max-Flow Algorithms for Energy Minimization …" In Third International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition …

    Published 2019-01-18
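The min-cut/max-flow routines benchmarked in the paper cited above all solve the same underlying problem as the textbook augmenting-path algorithms. A minimal Edmonds-Karp sketch (shortest augmenting paths by BFS), not the Boykov-Kolmogorov algorithm from the paper, with the graph as a dict-of-dicts of capacities that is mutated in place:

```python
from collections import deque

def max_flow(cap, s, t):
    # cap: dict of dicts of residual capacities, mutated in place
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow  # no augmenting path left: flow equals the min cut
        # recover the path, push the bottleneck, update residual edges
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= aug
            cap.setdefault(v, {})
            cap[v][u] = cap[v].get(u, 0) + aug
        flow += aug
```

By the max-flow/min-cut theorem, the returned value is also the minimum s-t cut, which is what GraphCuts uses to minimize the segmentation energy.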
  • From the column 专知

    [Latest] NIPS 2017 pre-proceedings paper list (with PDF download links)

    Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization · Online Linear Optimization with Approximation Algorithms · Geometric Descent Method for Convex Composite Minimization · PixelGAN Autoencoders · Consistent Multitask Learning with Nonlinear Output Relations · Fast Alternating Minimization Revisited: Faster and More General … · Variational Inference via \chi Upper Bound Minimization · On Quadratic … · Highly Efficient Gradient Boosting Decision Tree · Adversarial Ranking for Language Generation · Regret Minimization …

    Published 2018-04-10
  • From the column 机器之心

    More general and effective: Ant Group's home-grown WSAM optimizer accepted as a KDD Oral

    The generalization ability of deep neural networks (DNNs) is closely tied to how flat their minima are, which motivated the Sharpness-Aware Minimization (SAM) algorithm for finding flatter minima to improve generalization. SAM [1] is a technique for seeking flatter minima and one of the most promising current research directions. … "Sharpness-aware Minimization for Efficiently Improving Generalization." … "Surrogate Gap Minimization Improves Sharpness-Aware Training." ICLR '22. [3] Jiawei Du et al. "Efficient Sharpness-aware Minimization for Improved Training of Neural Networks."

    Edited 2023-10-08
  • From the column 新智元

    How was the AI poker ace that wins $1000 an hour trained? The team behind it answers questions online

    "Deep Counterfactual Regret Minimization." In ICML. "Deep Counterfactual Regret Minimization" (https://arxiv.org/pdf/1811.00164.pdf). "Regret Minimization in Behaviorally-Constrained Zero-Sum Games." "Dynamic Thresholding and Pruning for Regret Minimization." "Strategy-Based Warm Starting for Regret Minimization in Games." In AAAI.

    Published 2019-07-23
  • From the column 全栈程序员必看

    What does "mix" mean in Chinese? Does "mix" mean minimum?

    "mixup: BEYOND EMPIRICAL RISK MINIMIZATION", 2017 (ICLR 2018), Hongyi Zhang et al. See also Vicinal Risk Minimization and [1506.08700] "Dropout as data augmentation". Q: after linearly weighting the labels, don't we end up with a class in between the two samples?

    Edited 2022-11-09
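The mixup recipe the item above cites (Zhang et al., ICLR 2018) blends both the inputs and the one-hot labels with the same coefficient λ drawn from a Beta(α, α) distribution. A minimal sketch for one pair of examples; α = 0.2 is an illustrative default:

```python
import random

def mixup_pair(x1, y1, x2, y2, alpha=0.2):
    # lam ~ Beta(alpha, alpha); small alpha concentrates lam near 0 or 1,
    # so most mixed samples stay close to one of the originals
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    # labels are mixed with the same lam, yielding a soft (fractional) label
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y
```

This also addresses the question quoted in the snippet: the mixed label is not a new class but a soft target, e.g. 0.7 of one class and 0.3 of the other, trained against with the usual cross-entropy.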
  • From the column 3D视觉从入门到精通

    Latest survey: domain adaptation methods for LiDAR depth perception

    The most common baseline is the entropy minimization method; others include CyCADA, FeaDA, and OutDA. Domain-invariant feature learning methods (see figure) fall broadly into two principled categories: Divergence Minimization and Discriminator-based methods.

    Published 2021-07-29
  • From the column AI算法与图像处理

    ECCV 2022 | Semi-supervised learning for optical flow, higher accuracy, code open-sourced! Paper digest 2022-10-12

    Evolving Class Ontology. Paper: http://arxiv.org/pdf/2210.04993, Code: None · Make Sharpness-Aware Minimization … Paper: http://arxiv.org/pdf/2210.05177, Code: https://github.com/mi-peng/sparse-sharpness-aware-minimization

    Edited 2022-12-11
  • From the column CreateAMind

    A fairly thorough article explaining VAE-GAN with code

    Train Encoder on minimization of: kullback_leibler_loss(z_x, gaussian) + mean_squared_error(l_x_tilde_, l_x). Train Generator on minimization of: kullback_leibler_loss(z_x, gaussian) + mean_squared_error(l_x_tilde_, l_x) - 1*log(d_x_p). Train Discriminator on minimization of: -1*log(d_x) + log(1 - d_x_p) …

    Published 2018-07-25
  • From the column 算法和应用

    Black-box reductions for objective-robust discrete optimization based on LP relaxations

    … Objective-robust Discrete Optimization Problems Based on their LP-Relaxations. Original abstract: We consider robust discrete minimization …

    Published 2019-07-18
  • From the column AI 算法笔记

    [Weekly CV paper picks] Data-augmentation papers in computer vision

    Mixup, proposed by Facebook AI Research and MIT in "Beyond Empirical Risk Minimization", is similar. Citations: 6000+. Rating: ✦✦✦✦✧ … research, 2002, 16: 321-357. [4] Zhang H, Cisse M, Dauphin Y N, et al. mixup: Beyond empirical risk minimization …

    Published 2019-08-21
  • From the column 全栈程序员必看

    P-values of paths in AMOS | outputting the paths of an undirected graph

    … (Mahalanobis distance) 2.7 Sample Moments · 2.8 Notes for Model · 2.9 Estimates · 2.10 Modification Indices · 2.11 Minimization … Every parameter whose modification index exceeds the specified threshold is shown here, in the columns labeled "M.I." (modification index) and "Par Change" (estimated parameter change). 2.11 Minimization History: this section shows the value of the error function at each iteration, corresponding to the "Minimization history" option checked under "Output".

    Edited 2022-09-23
  • From the column CreateAMind

    disentangled-representation-papers

    Disentangled Representations via Synergy Minimization (Oct, Steeg et al.) [paper] … [paper] ** Learning Factorial Codes By Predictability Minimization (1992, Schmidhuber) [paper] *** Self-Organization …

    Published 2018-09-27