Physics informed neural operator ode #806
@@ -26,6 +26,7 @@ jobs:
        - Logging
        - Forward
        - DGM
        - ODEPINO
        - NNODE
        - NeuralAdapter
        - IntegroDiff
@@ -0,0 +1,11 @@
# Physics-Informed Neural Operator for Solving ODEs

```@docs
PINOODE
```

```@docs
DeepONet
```
@@ -0,0 +1,66 @@
# Physics-Informed Neural Operator for ODE Solvers

This tutorial provides an example of how to use a Physics-Informed Neural Operator (PINO) to solve a family of parametric ordinary differential equations (ODEs).

## Operator Learning for a Family of Parametric ODEs

In this section, we define a parametric ODE and solve it with a PINO. The PINO is trained to learn the mapping from the parameters of the ODE to its solution.
```@example pino
using OrdinaryDiffEq, OptimizationOptimisers
using Lux
using Statistics, Random
using NeuralPDE

# Parametric ODE family: u'(t) = cos(p * t), with parameter p.
equation = (u, p, t) -> cos(p * t)
tspan = (0.0f0, 1.0f0)
u0 = 1.0f0
prob = ODEProblem(equation, u0, tspan)

# Define the architecture of the neural operator.
branch = Lux.Chain(
    Lux.Dense(1, 10, Lux.tanh_fast),
    Lux.Dense(10, 10, Lux.tanh_fast),
    Lux.Dense(10, 10))
trunk = Lux.Chain(
    Lux.Dense(1, 10, Lux.tanh_fast),
    Lux.Dense(10, 10, Lux.tanh_fast),
    Lux.Dense(10, 10, Lux.tanh_fast))
deeponet = NeuralPDE.DeepONet(branch, trunk; linear = nothing)

# Bounds for the parameter p, and grid resolutions in p and t.
bounds = (p = [0.1f0, pi],)
db = (bounds.p[2] - bounds.p[1]) / 50
dt = (tspan[2] - tspan[1]) / 40
strategy = NeuralPDE.GridTraining([db, dt])

opt = OptimizationOptimisers.Adam(0.03)
alg = NeuralPDE.PINOODE(deeponet, opt, bounds; strategy = strategy)
sol = solve(prob, alg, verbose = false, maxiters = 2000)
predict = sol.u
```

Reviewer: Having more than one parameter would be more illustrative of its use case. Right now the branch and trunk both have size-one inputs, which makes it potentially confusing for the user how to modify this demo towards a case with more parameters.

Author: Ok, I will do it. It will take some time to implement this feature for many parameters.

Reviewer: Don't use grid training.

Author: Ok, I will also support QuasiRandomTraining for PINO ODE and use it in the doc example.

Reviewer: StochasticTraining*

Reviewer: `sol.original` needs an explanation here.
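As a hedged sketch of the reviewer's suggestion: NeuralPDE does provide a `StochasticTraining` strategy, but its support in `PINOODE` is only proposed in this thread, not implemented by the current diff. The swap might look like:

```julia
# Hypothetical: resample 100 random (p, t) collocation points each iteration
# instead of training on a fixed grid. StochasticTraining exists in NeuralPDE,
# but PINOODE support for it is only proposed in the review thread above.
strategy = NeuralPDE.StochasticTraining(100)
alg = NeuralPDE.PINOODE(deeponet, opt, bounds; strategy = strategy)
```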
Now let's compare the prediction from the learned operator with the ground-truth solution, obtained from the analytic solution of the parametric ODE.
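The ground truth follows by direct integration of the right-hand side:

```math
u(t) = u_0 + \int_0^t \cos(p s)\, ds = u_0 + \frac{\sin(p t)}{p}
```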
```@example pino
using Plots

# Compute the ground-truth solution for each parameter value on the training grid.
ground_analytic = (u0, p, t) -> u0 + sin(p * t) / p
p_ = bounds.p[1]:strategy.dx[1]:bounds.p[2]
p = reshape(p_, 1, size(p_)[1], 1)
ground_solution = ground_analytic.(u0, p, sol.t.trunk)

# Plot the predicted and the ground-truth solutions as filled contour plots.
# predict[1, :, :] is the predicted solution over parameter values and time.
plot(predict[1, :, :], linetype = :contourf)
plot!(ground_solution[1, :, :], linetype = :contourf)
```

Reviewer: Not using grid training will make this more compelling, since the operator should predict at new parameters, not at the ones it was trained on.

Author: Yes, I agree.

Reviewer: You don't show how to generate the solution at new parameters, which is the key to the PINO interface.

Author: By new parameters, do you mean another mesh that was not used for training, but within the same parameter bounds?

Reviewer: Yes.
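A minimal sketch of what such an evaluation at unseen parameters could look like. This is an assumption, not part of the diff: the call below reuses the `DeepONet` forward interface with freshly initialized weights purely to show the shapes; in real use the trained parameters would be substituted (per the thread above, presumably retrievable via `sol.original`, which this PR does not yet document).

```julia
using Random

# Hypothetical: evaluate the operator at 25 parameter values that are not on
# the training grid, but still lie within the training bounds.
p_new = reshape(collect(range(0.2f0, 3.0f0, length = 25)), 1, 25, 1)
t_grid = reshape(collect(range(tspan[1], tspan[2], length = 40)), 1, 1, 40)

# θ here is untrained (shape demonstration only); substitute the trained
# parameters from the solve result in real use.
θ, st = Lux.setup(Random.default_rng(), deeponet)
u_new, _ = deeponet((branch = p_new, trunk = t_grid), θ, st)
size(u_new)  # (1, 25, 40): one value per (parameter, time) pair
```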
```@example pino
# i is the index of a parameter value p in the training set
i = 20
# Predicted solution from the PINO model at the i-th parameter value
plot(predict[1, i, :], label = "Predicted")
# Ground-truth solution at the same parameter value
plot!(ground_solution[1, i, :], label = "Ground truth")
```
@@ -0,0 +1,101 @@
abstract type NeuralOperator <: Lux.AbstractExplicitLayer end
""" | ||
DeepONet(branch,trunk,linear=nothing) | ||
|
||
`DeepONet` is differential neural operator focused for solving physic-informed parametric ODEs. | ||
|
||
DeepONet uses two neural networks, referred to as the "branch" and "trunk", to approximate | ||
the solution of a differential equation. The branch network takes the spatial variables as | ||
input and the trunk network takes the temporal variables as input. The final output is | ||
the dot product of the outputs of the branch and trunk networks. | ||
|
||
DeepONet is composed of two separate neural networks referred to as the "branch" and "trunk", | ||
respectively. The branch net takes on input represents a function evaluated at a collection | ||
of fixed locations in some boundsand returns a features embedding. The trunk net takes the | ||
continuous coordinates as inputs, and outputs a features embedding. The final output of the | ||
DeepONet, the outputs of the branch and trunk networks are merged together via a dot product. | ||
|
||
## Positional Arguments | ||
* `branch`: A branch neural network. | ||
* `trunk`: A trunk neural network. | ||
|
||
## Keyword Arguments | ||
* `linear`: A linear layer to apply to the output of the branch and trunk networks. | ||
|
||
## Example | ||
|
||
```julia | ||
branch = Lux.Chain( | ||
Lux.Dense(1, 10, Lux.tanh_fast), | ||
Lux.Dense(10, 10, Lux.tanh_fast), | ||
Lux.Dense(10, 10)) | ||
trunk = Lux.Chain( | ||
Lux.Dense(1, 10, Lux.tanh_fast), | ||
Lux.Dense(10, 10, Lux.tanh_fast), | ||
Lux.Dense(10, 10, Lux.tanh_fast)) | ||
linear = Lux.Chain(Lux.Dense(10, 1)) | ||
|
||
deeponet = DeepONet(branch, trunk; linear= linear) | ||
|
||
a = rand(1, 50, 40) | ||
b = rand(1, 1, 40) | ||
x = (branch = a, trunk = b) | ||
θ, st = Lux.setup(Random.default_rng(), deeponet) | ||
y, st = deeponet(x, θ, st) | ||
``` | ||
|
||
## References | ||
* Lu Lu, Pengzhan Jin, George Em Karniadakis "DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators" | ||
* Sifan Wang "Learning the solution operator of parametric partial differential equations with physics-informed DeepOnets" | ||
""" | ||
struct DeepONet{L <: Union{Nothing, Lux.AbstractExplicitLayer}} <: NeuralOperator
    branch::Lux.AbstractExplicitLayer
    trunk::Lux.AbstractExplicitLayer
    linear::L
end

Reviewer: This should really be living in NeuralOperators.jl. cc @avik-pal

Reviewer: Yes, agreed.

Author: I just implemented it for the tests because NeuralOperators.jl was outdated and couldn't be used. Yes, agreed, it should be relocated to NeuralOperators.jl.

Reviewer: Should it be NeuralOperators.jl or LuxNeuralOperators.jl? I see SciML/NeuralOperators.jl#5 implementing DeepONet.

Author: Yes, Lux. I didn't know there was already one like this.

Author: I can do this task: move DeepONet from here to LuxNeuralOperators.

Reviewer: Basic NO and DeepONet are now in LuxNeuralOperators.jl.

function DeepONet(branch, trunk; linear = nothing)
    DeepONet(branch, trunk, linear)
end
function Lux.setup(rng::AbstractRNG, l::DeepONet)
    # Initialize parameters and states for the branch and trunk nets,
    # plus the optional linear layer when one is supplied.
    branch, trunk, linear = l.branch, l.trunk, l.linear
    θ_branch, st_branch = Lux.setup(rng, branch)
    θ_trunk, st_trunk = Lux.setup(rng, trunk)
    θ = (branch = θ_branch, trunk = θ_trunk)
    st = (branch = st_branch, trunk = st_trunk)
    if linear !== nothing
        θ_linear, st_linear = Lux.setup(rng, linear)
        θ = (θ..., linear = θ_linear)
        st = (st..., linear = st_linear)
    end
    θ, st
end

Lux.initialstates(::AbstractRNG, ::DeepONet) = NamedTuple()
@inline function (f::DeepONet)(x::NamedTuple, θ, st::NamedTuple)
    x_branch, x_trunk = x.branch, x.trunk
    branch, trunk = f.branch, f.trunk
    st_branch, st_trunk = st.branch, st.trunk
    θ_branch, θ_trunk = θ.branch, θ.trunk
    out_b, st_b = branch(x_branch, θ_branch, st_branch)
    out_t, st_t = trunk(x_trunk, θ_trunk, st_trunk)
    if f.linear !== nothing
        # Apply the linear layer to the elementwise product of the branch and
        # trunk embeddings, then contract the embedding dimension.
        linear = f.linear
        θ_linear, st_linear = θ.linear, st.linear
        out_ = out_b .* out_t
        out, st_linear = linear(out_, θ_linear, st_linear)
        out = sum(out, dims = 1)
        return out, (branch = st_b, trunk = st_t, linear = st_linear)
    else
        # Merge the branch and trunk embeddings via a dot product over the
        # embedding dimension.
        out = sum(out_b .* out_t, dims = 1)
        return out, (branch = st_b, trunk = st_t)
    end
end
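To make the merge rule above concrete, here is a minimal sketch (under the same shapes as the docstring example) checking that, without a `linear` layer, the embedding dimension is summed out:

```julia
using Lux, Random

branch = Lux.Chain(Lux.Dense(1, 10, Lux.tanh_fast), Lux.Dense(10, 10))
trunk = Lux.Chain(Lux.Dense(1, 10, Lux.tanh_fast), Lux.Dense(10, 10))
deeponet = DeepONet(branch, trunk)

x = (branch = rand(Float32, 1, 50, 40), trunk = rand(Float32, 1, 1, 40))
θ, st = Lux.setup(Random.default_rng(), deeponet)
y, _ = deeponet(x, θ, st)

# The 10-dimensional embeddings are contracted away: one scalar per (p, t) pair.
@assert size(y) == (1, 50, 40)
```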
Reviewer: Can you name it `PINOODE` for consistency?