
Emergence of Complex Computational Structures From Chaotic Neural Networks Through Reward-Modulated Hebbian Learning #2

ghost opened this issue Oct 21, 2016 · 8 comments



ghost commented Oct 21, 2016

Work to Replicate

Hoerzer, G. M., Legenstein, R., and Maass, W. (2014). Emergence of Complex Computational Structures From Chaotic Neural Networks Through Reward-Modulated Hebbian Learning. Cerebral Cortex 24(3): 677–690. doi:10.1093/cercor/bhs348

Motivation

This article claims an important step forward in realistic liquid state machines, which makes it an ideal study for replication. The algorithms are also simple and easy to implement.

I attempted to replicate it myself. The learning algorithm correctly produced the target time series, but the weights did not converge. So I cannot replicate the post-learning phase of Figure 1f, for example, because freezing the weights (i.e., turning off learning) causes the error to increase drastically. I carefully checked that my algorithms are exactly as described in the Methods and Supplementary Materials, but it's possible I missed something.

My replication attempt is in Matlab, so I cannot submit it to ReScience. I would like to know if someone else is able to replicate the study or has the same problems I did.

Challenges

Convergence of the weights. In the article, the algorithm is shown to be accurate "post-learning" when the weights are frozen. I am unable to get accurate output when weights are frozen.
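
For concreteness, here is a minimal Python sketch of the learning loop as I read the Methods (my actual attempt is in Matlab, but Python is easier to share here). The network size, time constants, noise amplitude, and learning rate are placeholder choices of mine, not the paper's exact parameters:

```python
import numpy as np

N     = 1000         # reservoir size (placeholder)
dt    = 1e-3         # integration step (s)
tau   = 50e-3        # neuron time constant (s, placeholder)
g     = 1.5          # recurrent gain; g > 1 puts the network in the chaotic regime
eta   = 5e-4         # learning rate (fixed here; the paper decays it)
alpha = dt / 20e-3   # low-pass filter coefficient for the running averages

rng   = np.random.default_rng(0)
W_rec = g * rng.normal(0, 1/np.sqrt(N), (N, N))   # fixed recurrent weights
w_fb  = rng.uniform(-1, 1, N)                     # fixed feedback weights
w_out = np.zeros(N)                               # plastic readout weights

def target(t):
    # the period-1 target function (see the discussion further down the thread)
    return ((1.3/1.5)*np.sin(2*np.pi*t) + (1.3/3)*np.sin(4*np.pi*t)
            + (1.3/9)*np.sin(6*np.pi*t) + (1.3/3)*np.sin(8*np.pi*t))

x = rng.normal(0, 0.5, N)
z_bar, P_bar = 0.0, 0.0

for step in range(100_000):
    t  = step * dt
    r  = np.tanh(x)
    xi = rng.uniform(-0.5, 0.5)        # exploration noise on the readout
    z  = w_out @ r + xi
    # performance signal and binary modulatory signal
    P = -(z - target(t))**2
    M = 1.0 if P > P_bar else 0.0
    # reward-modulated Hebbian (EH) update of the readout weights
    w_out += eta * M * (z - z_bar) * r
    # low-pass filters and network dynamics; the noisy readout is fed back
    z_bar = (1 - alpha)*z_bar + alpha*z
    P_bar = (1 - alpha)*P_bar + alpha*P
    x += dt/tau * (-x + W_rec @ r + w_fb * z)
```

This reproduces the behavior I describe above: the output tracks the target while learning is on, but drifts once w_out is frozen.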


sje30 commented Oct 24, 2016

Interesting project.

Have you contacted the authors of the original study yet to explain your problem?


x75 commented Oct 24, 2016

hi, I haven't precisely replicated all the experiments described in the paper, but I have played with the signal generation task using Python. I remember that the Matlab code for the FORCE paper helped a lot. I briefly fired up my code and it seems to do something; it probably needs a longer learning phase and some parameter tuning. See the graphic attached.
[Attached figure: eh_siggen — output of the signal-generation run]


ghost commented Nov 10, 2016

This is consistent with what I've seen and I was not able to resolve it with a longer learning phase.

It appears that your target has a low-frequency modulation. This could be because you're adding frequencies that are not integer multiples of each other, so you get a low-frequency beat. I think this low-frequency component makes the task much harder. Could you try the same target without it? In the paper they used

f(t) = (1.3/1.5)·sin(2πt) + (1.3/3)·sin(4πt) + (1.3/9)·sin(6πt) + (1.3/3)·sin(8πt),

which has period 1.
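
A quick numeric check of the period-1 claim (f here is just the formula above written out):

```python
import numpy as np

def f(t):
    return ((1.3/1.5)*np.sin(2*np.pi*t) + (1.3/3)*np.sin(4*np.pi*t)
            + (1.3/9)*np.sin(6*np.pi*t) + (1.3/3)*np.sin(8*np.pi*t))

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
# all four components have frequencies that are integer multiples of 1 Hz,
# so the sum repeats exactly every 1 s
assert np.allclose(f(t), f(t + 1.0))
```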


x75 commented Nov 11, 2016

hey, thanks for poking me. You're right, non-integer frequency relations between the sine components lead to longer periods. So I ran it again using the original target function from the paper, which you suggested. Below are two experiments, of length a) 100K timesteps (100 s) and b) 400K timesteps (400 s). The washout ratio is 0.1 and the testing ratio is 0.2, so the net training length is 0.7 × the total length. I also activated the decaying η, as in the original paper.

[Attached figure: rescience_res_1000_mso4_train100000_c]
a) Running 100K steps, training 70K steps

[Attached figure: rescience_res_1000_mso4_train400000_c]
b) Same as above, but running over 400K/280K steps. The bottom panel is the same as the second from top, only at a different zoom, indicating long-term stability.

The last one looks much better in its free-running approximation of the target, while showing the expected global drift. The free-running output also seems stable over 80 s (bottom panel).
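
For reference, the split arithmetic for run a), plus the kind of decaying learning rate I mean. The 1/(1 + t/τ) form and the constants below are my own guesses, not necessarily the paper's exact schedule:

```python
# split arithmetic for run a): washout 0.1, testing 0.2, training the rest
n_total   = 100_000
n_washout = int(0.1 * n_total)             # 10_000 steps, learning off, state settles
n_test    = int(0.2 * n_total)             # 20_000 steps, weights frozen
n_train   = n_total - n_washout - n_test   # 70_000 steps of actual learning

# decaying learning rate; the 1/(1 + t/tau) form is my assumption
def eta(step, eta0=5e-4, tau_decay=20_000.0):
    return eta0 / (1.0 + step / tau_decay)
```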


rougier commented Apr 14, 2017


x75 commented May 4, 2017

@rougier thanks for the pointers, I need to check them out; I also have the Miconi paper somewhere on my stack.

Anyway, I finally managed to push the code that generated the plots above to the smp_base repo.


Adrianzo commented Nov 7, 2017

I believe the label on this issue should be Machine Learning or Neural Networks.
