Emergence of Complex Computational Structures From Chaotic Neural Networks Through Reward-Modulated Hebbian Learning #2
Comments
Interesting project. Have you contacted the authors of the original study yet to explain your problem?
This is consistent with what I've seen, and I was not able to resolve it with a longer learning phase. It appears that your target has a low-frequency modulation. This could be because you're adding frequencies that are not integer multiples of each other (so the sum has a slow, low-frequency beat). This low-frequency component makes the task much harder, I think. Could you try the same without the low-frequency component? In the paper, they used: f(t) = (1.3/1.5) sin(2πt) + (1.3/3) sin(4πt) + (1.3/9) sin(6πt) + (1.3/3) sin(8πt)
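For reference, here is a minimal Python sketch (not part of the original comment) that plots the quoted target next to a hypothetical variant containing a non-integer frequency ratio, to illustrate the slow envelope described above. The added 2.1π component is an arbitrary illustrative choice, not anything from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.arange(0.0, 10.0, 0.001)  # 10 s sampled at 1 kHz

# Target quoted in the comment above: all frequencies are integer multiples
# of 1 Hz, so the signal repeats every second with no slow envelope.
f_paper = ((1.3 / 1.5) * np.sin(2 * np.pi * t)
           + (1.3 / 3) * np.sin(4 * np.pi * t)
           + (1.3 / 9) * np.sin(6 * np.pi * t)
           + (1.3 / 3) * np.sin(8 * np.pi * t))

# Hypothetical variant: one component at a non-integer frequency ratio
# beats against the others and produces a slow modulation of the envelope.
f_beating = f_paper + (1.3 / 3) * np.sin(2.1 * np.pi * t)

fig, axes = plt.subplots(2, 1, sharex=True)
axes[0].plot(t, f_paper)
axes[0].set_title("Integer-multiple frequencies (period 1 s)")
axes[1].plot(t, f_beating)
axes[1].set_title("Non-integer ratio added: slow envelope appears")
axes[1].set_xlabel("time (s)")
plt.tight_layout()
plt.show()
```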
I've been given some pointers related to this work:
I believe the actual label on this issue should be Machine Learning or Neural Networks.
Work to Replicate
Hoerzer GM, Legenstein R, Maass W (2014). Emergence of Complex Computational Structures From Chaotic Neural Networks Through Reward-Modulated Hebbian Learning. Cereb. Cortex 24(3): 677-690. doi: 10.1093/cercor/bhs348
Motivation
This article claims an important step forward for realistic liquid state machines, which makes it an ideal study for replication. The algorithms are also simple and easy to implement.
I attempted to replicate it myself. During learning, the algorithm correctly produced the target time series, but the weights did not converge. As a result, I cannot replicate the post-learning phase shown, for example, in Figure 1f: freezing the weights (i.e., turning off learning) causes the error to increase drastically. I carefully checked that my algorithms match the descriptions in the Methods and Supplementary Materials, but it is possible I missed something.
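For concreteness, the general shape of the reward-modulated (exploratory) Hebbian readout update, as I read it from the Methods, is sketched below in Python. Every numerical value here (network size, gain, time constants, learning rate, noise amplitude, averaging coefficient) is a placeholder chosen for illustration, not a published parameter, and the details of the running averages may differ from the paper.

```python
import numpy as np

# Hedged sketch of a reward-modulated ("exploratory") Hebbian readout update
# on a fixed chaotic reservoir. The rule structure follows my reading of the
# Methods; every numerical value below is a placeholder, not a published one.

rng = np.random.default_rng(0)
N, dt, tau = 500, 1e-3, 10e-3            # reservoir size, time step, neuron time constant
g = 1.5                                   # recurrent gain (chaotic regime for g > 1)
J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # fixed recurrent weights
w = np.zeros(N)                           # plastic readout weights
w_fb = rng.uniform(-1, 1, N)              # feedback weights from readout to reservoir

x = 0.5 * rng.standard_normal(N)          # neuron state
z_bar, P_bar = 0.0, 0.0                   # running averages of readout and performance
alpha = 0.8                               # averaging coefficient (placeholder)
eta, noise_amp = 5e-4, 0.5                # learning rate and exploration noise (placeholders)

def target(t):
    # Periodic target quoted in the comments above.
    return ((1.3 / 1.5) * np.sin(2 * np.pi * t) + (1.3 / 3) * np.sin(4 * np.pi * t)
            + (1.3 / 9) * np.sin(6 * np.pi * t) + (1.3 / 3) * np.sin(8 * np.pi * t))

T_learn = 50.0                            # seconds of learning (placeholder)
for step in range(int(T_learn / dt)):
    t = step * dt
    r = np.tanh(x)
    z = w @ r + noise_amp * rng.uniform(-1, 1)   # noisy readout (exploration)
    P = -(z - target(t)) ** 2                    # instantaneous performance
    M = 1.0 if P > P_bar else 0.0                # binary modulatory (reward) signal
    w += eta * (z - z_bar) * M * r               # reward-modulated Hebbian update
    z_bar = alpha * z_bar + (1 - alpha) * z      # update running averages
    P_bar = alpha * P_bar + (1 - alpha) * P
    x += dt / tau * (-x + J @ r + w_fb * z)      # reservoir dynamics with readout feedback
```

During the loop above, the noisy readout tracks the target well in my implementation; the problem appears only once learning stops.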
My replication attempt is in Matlab, so I cannot submit it to ReScience. I would like to know if someone else is able to replicate the study or has the same problems I did.
Challenges
Convergence of the weights. In the article, the algorithm is shown to remain accurate "post-learning", when the weights are frozen. I am unable to get accurate output once the weights are frozen.
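To make the failure concrete, the frozen-weight ("post-learning") test I am running looks essentially like the continuation below of the sketch above: the readout weights are held fixed and the exploration noise is removed. The RMSE over the test window is my own error measure, not necessarily the one used in the article.

```python
# Frozen-weight ("post-learning") test, continuing the sketch above:
# w is no longer updated and the exploration noise is removed.
errs = []
for step in range(int(20.0 / dt)):        # 20 s test window (placeholder)
    t = T_learn + step * dt
    r = np.tanh(x)
    z = w @ r                              # readout without noise or learning
    errs.append((z - target(t)) ** 2)
    x += dt / tau * (-x + J @ r + w_fb * z)

rmse = np.sqrt(np.mean(errs))              # this is where the error increases drastically for me
print(f"post-learning RMSE: {rmse:.3f}")
```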