
JuMP: TypeError: in typeassert, expected Float64, got ForwardDiff.Dual with autodiff = true, and an issue with exp()

Stack Overflow user
Asked 2022-04-28 05:34:35
1 answer · 166 views · 0 followers · 0 votes

So I tried to boil a more complex piece of code I wrote down to a minimal example, in order to ask my questions:

  1. A huge error I keep running into is expected Float64, got ForwardDiff.Dual. Can anyone give me a hint on how to avoid this error in general? I feel like every time I set up a new optimization problem I have to reinvent the wheel to make it go away.
  2. Apparently you cannot autodiff Julia's matrix exponential exp()? Does anyone know how to make that work?
  3. A workaround is to approximate it with a finite Taylor series. With 20 terms in one of my functions autodiff works, but it is not accurate enough, so I went to 40 terms. Julia then tells me to use factorial(big(k)), and when I try that with autodiff it no longer works. Can anyone solve this?

Any suggestions would be much appreciated!
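As an aside on the first point: the expected Float64, got ForwardDiff.Dual error usually comes from hard-coding Float64 somewhere inside the function (array element types, type assertions, conversions), because ForwardDiff evaluates the function with ForwardDiff.Dual numbers. A minimal sketch of the failing and the working pattern (f_bad and f_good are illustrative names, not from the question):

```julia
using ForwardDiff

# Hard-coding Float64 breaks autodiff: ForwardDiff calls the function
# with ForwardDiff.Dual numbers, which cannot be stored in a Float64 array.
f_bad(x) = sum(push!(zeros(Float64, 0), x[1]^2))

# Writing the function generically over the input element type T fixes it:
# with Float64 input the array is Float64, with Dual input it is Dual.
f_good(x::AbstractVector{T}) where {T} = sum(push!(zeros(T, 0), x[1]^2))

ForwardDiff.gradient(f_good, [3.0])   # works: [6.0]
# ForwardDiff.gradient(f_bad, [3.0])  # errors: cannot convert Dual to Float64
```

The same pattern applies to any intermediate container: allocate it with the type of the input (zeros(T, ...), similar(x), one(T)), never with a literal Float64.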

using Cubature
using Juniper
using Ipopt
using JuMP
using LinearAlgebra
using Base.Threads
using Cbc
using DifferentialEquations
using Trapz

function mat_exp(x::AbstractVector{T}, dim, num_terms, A) where T
    sum = zeros(Complex{T}, (dim, dim))
    A[1, 1] = A[1, 1] * x[1]
    A[2, 2] = A[2, 2] * x[2]
    return exp(A) - 1
end

function exp_approx_no_big(x::AbstractVector{T}, dim, num_terms, A) where T
    sum = zeros(Complex{T}, (dim, dim))
    A[1, 1] = A[1, 1] * x[1]
    A[2, 2] = A[2, 2] * x[2]
    for k = 0:num_terms-1
        sum = sum + (1.0 / factorial(k)) * A^k
    end
    return norm(sum) - 1
end

function exp_approx_big(x::AbstractVector{T}, dim, num_terms, A) where T
    sum = zeros(Complex{T}, (dim, dim))
    A[1, 1] = A[1, 1] * x[1]
    A[2, 2] = A[2, 2] * x[2]
    for k = 0:num_terms-1
        sum = sum + (1.0 / factorial(big(k))) * A^k
    end
    return norm(sum) - 1
end

optimizer = Juniper.Optimizer
nl_solver = optimizer_with_attributes(Ipopt.Optimizer, "print_level" => 0)
mip_solver = optimizer_with_attributes(Cbc.Optimizer, "logLevel" => 0, "threads" => nthreads())
m = Model(optimizer_with_attributes(optimizer, "nl_solver" => nl_solver, "mip_solver" => mip_solver))

@variable(m, 0.0 <= x[1:2] <= 1.0)
dim = 5
A = zeros(Complex, (dim, dim))
for k = 1:dim
    A[k, k] = 1.0
end
println(A)

f(x...) = exp_approx_no_big(collect(x), dim, 20, A)
g(x...) = exp_approx_big(collect(x), dim, 40, A)
h(x...) = mat_exp(collect(x), dim, 20, A)
register(m, :f, 2, f; autodiff = true)
@NLobjective(m, Min, f(x...))

optimize!(m)

println(JuMP.value.(x))
println(JuMP.objective_value(m))
println(JuMP.termination_status(m))

1 Answer

Stack Overflow user

Answered 2022-04-28 06:56:56

There are a number of issues with your mat_exp function:

  • It modifies A in-place, so repeated calls will not do what you think they do
  • It returns exp(A) - 1, which is a matrix; you probably meant norm(exp(A)) - 1, since JuMP only supports user-defined functions with scalar outputs
  • ForwardDiff, which JuMP uses when you pass autodiff = true, does not support differentiating through the matrix exponential exp:

julia> using ForwardDiff

julia> function mat_exp(x::AbstractVector{T}) where {T}
           A = zeros(Complex{T}, (dim, dim))
           for k = 1:dim
               A[k, k] = one(T)
           end
           A[1, 1] = A[1, 1] * x[1]
           A[2, 2] = A[2, 2] * x[2]
           return norm(exp(A)) - one(T)
       end
mat_exp (generic function with 3 methods)

julia> ForwardDiff.gradient(mat_exp, [0.5, 0.5])
ERROR: MethodError: no method matching exp(::Matrix{Complex{ForwardDiff.Dual{ForwardDiff.Tag{typeof(mat_exp), Float64}, Float64, 2}}})
Closest candidates are:
  exp(::StridedMatrix{var"#s832"} where var"#s832"<:Union{Float32, Float64, ComplexF32, ComplexF64}) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/dense.jl:557
  exp(::StridedMatrix{var"#s832"} where var"#s832"<:Union{Integer, Complex{var"#s831"} where var"#s831"<:Integer}) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/dense.jl:558
  exp(::Diagonal) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/diagonal.jl:603
  ...
Stacktrace:
 [1] mat_exp(x::Vector{ForwardDiff.Dual{ForwardDiff.Tag{typeof(mat_exp), Float64}, Float64, 2}})
   @ Main ./REPL[34]:8
 [2] vector_mode_dual_eval!(f::typeof(mat_exp), cfg::ForwardDiff.GradientConfig{ForwardDiff.Tag{typeof(mat_exp), Float64}, Float64, 2, Vector{ForwardDiff.Dual{ForwardDiff.Tag{typeof(mat_exp), Float64}, Float64, 2}}}, x::Vector{Float64})
   @ ForwardDiff ~/.julia/packages/ForwardDiff/jJIvy/src/apiutils.jl:37
 [3] vector_mode_gradient(f::typeof(mat_exp), x::Vector{Float64}, cfg::ForwardDiff.GradientConfig{ForwardDiff.Tag{typeof(mat_exp), Float64}, Float64, 2, Vector{ForwardDiff.Dual{ForwardDiff.Tag{typeof(mat_exp), Float64}, Float64, 2}}})
   @ ForwardDiff ~/.julia/packages/ForwardDiff/jJIvy/src/gradient.jl:106
 [4] gradient(f::Function, x::Vector{Float64}, cfg::ForwardDiff.GradientConfig{ForwardDiff.Tag{typeof(mat_exp), Float64}, Float64, 2, Vector{ForwardDiff.Dual{ForwardDiff.Tag{typeof(mat_exp), Float64}, Float64, 2}}}, ::Val{true})
   @ ForwardDiff ~/.julia/packages/ForwardDiff/jJIvy/src/gradient.jl:19
 [5] gradient(f::Function, x::Vector{Float64}, cfg::ForwardDiff.GradientConfig{ForwardDiff.Tag{typeof(mat_exp), Float64}, Float64, 2, Vector{ForwardDiff.Dual{ForwardDiff.Tag{typeof(mat_exp), Float64}, Float64, 2}}}) (repeats 2 times)
   @ ForwardDiff ~/.julia/packages/ForwardDiff/jJIvy/src/gradient.jl:17
 [6] top-level scope
   @ REPL[35]:1

I also don't know why you are using Juniper, or why you have a bunch of those other packages installed.

If you want to discuss this further, come join the community forum: https://discourse.julialang.org/c/domain/opt/13. (It is far better than Stack Overflow.) Someone there may have a suggestion, but I don't know of an AD tool in Julia that can differentiate through the matrix exponential.
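One way around the factorial overflow from the asker's third point (this is not part of the original answer, just a sketch): build each Taylor term incrementally as term * A / k, so no factorial — and hence no factorial(big(k)) — is ever computed, and ForwardDiff's Dual numbers flow through:

```julia
using LinearAlgebra, ForwardDiff

# Sketch: truncated series for norm(exp(A)) - 1 without factorial().
# Each term is built from the previous one, term_k = term_{k-1} * A / k,
# which avoids the Int64 overflow of factorial(k) for k >= 21 and stays
# generic over T, so ForwardDiff.Dual inputs work.
function exp_approx(x::AbstractVector{T}; dim = 5, num_terms = 40) where {T}
    A = Matrix{Complex{T}}(I, dim, dim)
    A[1, 1] *= x[1]
    A[2, 2] *= x[2]
    total = Matrix{Complex{T}}(I, dim, dim)  # k = 0 term of the series
    term = Matrix{Complex{T}}(I, dim, dim)
    for k in 1:num_terms-1
        term = term * A / k   # A^k / k! built incrementally
        total += term
    end
    return norm(total) - 1
end

ForwardDiff.gradient(exp_approx, [0.5, 0.5])
```

Note the function also rebuilds A from scratch on every call rather than mutating a captured matrix, which addresses the in-place-modification issue above.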

0 votes
Page content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/72038475
