Biophysical Neural Network Simulator


<Inference Only>


1. Run the simulator:

>> /home/dkim785/Tools/CUDA/INF_Neuron_Simulator_HPC/Debug/INF_Neuron_Simulator_HPC


2. Type '8'


3. Run the Matlab code (/home/dkim785/Tools/CUDA/INF_Neuron_Simulator_HPC/Debug/mass_process_read_potential_2.m).


4. Run the simulation again and type '7'.


5. You can see the result.


<How to Change the Parameters>


1. For the first try, I changed the 'Debug/device2_output_network.txt'.


2. Make a copy of the file, and open the copy.


3. Inside this file, you can see the parameters and the weights.


4. The order is: this neuron's number // 2 // LIF_b // LIF_th // LIF_reset // 20.000000 // LIF_c // LIF_a // -65.060000 // 0.000 // 0.000 ; From, Weight, From, Weight, ... (see the hypothetical example line after this list).


5. Change the parameters.


6. If it does not work, we should change the weights (this requires learning; see <Learning> below).
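
A hypothetical line in this format (every numeric value below is invented for illustration; only the field order follows step 4):

12 2 0.200000 30.000000 -65.000000 20.000000 -2.000000 0.030000 -65.060000 0.000 0.000 ; 3, 0.45, 7, -0.12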


<Learning>


1. Run the INF_Neuron_Simulator_HPC.


2. Type '4'


Yun's Matlab Simulator


<Yun's Code>


1. Check the parameters by running model_compare_tuned.m.


2. Inside the model_compare_tuned.m code, there is

1. Basic principles

2. Micro-level (sentences)

-> Most of your time will be spent here.

3. Meso-level (paragraphs)

4. Macro-level


* Coherence -> Logical

* Selection of ideas: which information is included, and which is excluded?

* Organization of ideas. 

* Cohesion -> Connection

* How are ideas linguistically connected to one another?

* Four different ways to create cohesion:

* Repetition

* Repetition: Exact Word

* ~~ materials, materials ~~

* Repetition: With a difference

* ~~to increase~~. ~~ increasing ~~. ~~increase~~

* Repetition: Synonym or Other Representation

* This 'article' does not ~~, nor is 'it' a compare~~. Rather, the following 'text' identifies ~~.

* Which one is most common? -> Repetition: Exact Word.

* Transition words or phrases (So, Therefore, However, etc…)

* Linking words (a list is uploaded on Canvas)

* Conjunctions

* Punctuation (:, ;)

* These are cohesive devices that operate inside a sentence (micro-level).

* Parallel Structure

* such as (i) promoting ~~; (ii) allowing ~~; (iii) offering ~~. Written this way, the reader can tell that three things are being discussed and that all three are of equal standing.


-> For this reason, good coherence with bad cohesion is possible, and vice versa.

Take care that both are good.


* Three types of information

* Primary info (the main topic; what I did myself)

* Signal it by writing 'we' or 'our research'.

* 'This may be an issue of XYZ but may also be more telling about ABC'

* 'Hence, understanding XYZ can help provide ABC'

* Using phrases like the above tells the reader that the information is primary.

* Secondary info (references to other people's work)

* Cite the reference with a marker such as [1].

* Common Knowledge 

* Not about my research or another person's specific research, but something everyone knows as common sense. For example, smoking causes cancer: this was once a research finding, but nowadays it is simply common knowledge.


-> Make sure the reader can distinguish all three of these at a glance while reading. Otherwise, the text can amount to plagiarism.


- HW: highlight the material received by email.

Yellow: Secondary Info

Green: Common Knowledge

Leave primary info unhighlighted.

dc_shell


- Open dc_shell


>> dc_shell





- To run a tcl file from inside dc_shell:

DC_SHELL>> source ./~.tcl





- If you want to use dc_shell commands from tcsh (for example, invoking dc_shell from a Python script), use the -f option:


>> dc_shell -f ./~.tcl


If you also put 'exit' at the very bottom of the ~.tcl file, dc_shell runs only that tcl file and then exits immediately.
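
As a minimal sketch, such a tcl file could look like the following (the file names, design name, and report path are hypothetical; read_verilog, current_design, compile, report_timing, and exit are standard dc_shell commands):

# run_syn.tcl (hypothetical example)
read_verilog ./top.v               ;# read the RTL source (hypothetical file)
current_design top                 ;# select the top-level module (hypothetical name)
compile                            ;# synthesize with default options
report_timing > ./timing.rpt       ;# write a timing report to a file
exit                               ;# leave dc_shell and return to tcsh

With the exit at the bottom, >> dc_shell -f ./run_syn.tcl runs the whole flow non-interactively.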






- To check the scale/units used by the current library:


DC_SHELL>> report_units










# STDP, Wikipedia

#research #SNN #STDP


Spike-timing-dependent plasticity (STDP) is a biological process that adjusts the strength of connections between neurons in the brain. The process adjusts connection strengths based on the relative timing of a particular neuron's output and input spikes. If an input spike from neuron A tends to occur immediately before neuron B's output spike, then the synapse between neuron A and neuron B is made somewhat stronger. If the input spike tends to occur immediately after the output spike, then that particular input is made somewhat weaker. (Wikipedia)


With STDP, repeated presynaptic spike arrival a few milliseconds before postsynaptic spikes leads, in many synapse types, to long-term potentiation (LTP) of the synapse, whereas repeated spike arrival after postsynaptic spikes leads to long-term depression (LTD) of the same synapse. The change of the synapse plotted as a function of the relative timing of pre- and postsynaptic action potentials is called the STDP function or learning window, and it varies between synapse types.

[Figure 1: STDP learning window]

Figure 1: Spike-timing-dependent plasticity (schematic): the STDP function shows the change of synaptic connections as a function of the relative timing of pre- and postsynaptic spikes after 60 spike pairings. Schematically redrawn after Bi and Poo (1998).


When the weight increases, the size of the increase depends on the current weight.


<Experimental STDP Protocol>

The pairing is repeated 50-100 times at a fixed frequency (for example, 10 Hz). (Does this mean the neuron's firing frequency is around 0.5-1 kHz?) The weight of the synapse is measured as the amplitude (or initial slope) of the postsynaptic potential.


<Basic STDP Model>

The weight change Δw_j of a synapse from a presynaptic neuron j depends on the relative timing between presynaptic spike arrivals and postsynaptic spikes. Let us name the presynaptic spike arrival times at synapse j by t_j^f, where f = 1, 2, 3, … counts the presynaptic spikes. Similarly, t_i^n with n = 1, 2, 3, … labels the firing times of the postsynaptic neuron. The total weight change Δw_j induced by a stimulation protocol with pairs of pre- and postsynaptic spikes is then (Gerstner et al. 1996, Kempter et al. 1999):

\Delta w_j = \sum_f \sum_n W(t_i^n - t_j^f)


Question: if there were two presynaptic spikes before a postsynaptic spike, is only the later one used in the computation? -> Multiple variants exist: 'all-to-all' schemes consider every pre/post pair, while 'nearest' schemes consider only the closest spike.


where W(x) denotes one of the STDP functions (also called the learning window).

A popular choice for the STDP function W(x) is:

W(x) = A_+ \exp(-x/\tau_+) \quad \text{for } x > 0

W(x) = -A_- \exp(x/\tau_-) \quad \text{for } x < 0

which has been used in fits to experimental data (Zhang et al. 1998) and in models (e.g., Song et al. 2000). The parameters A_+ and A_- may depend on the current value of the synaptic weight w_j. The time constants are on the order of τ_+ = 10 ms and τ_- = 10 ms.
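
As a minimal sketch of this pair-based rule (assuming the all-to-all variant mentioned above; the function name, parameter values, and spike times are made up for illustration):

import numpy as np

def stdp_delta_w(pre_times, post_times,
                 A_plus=0.01, A_minus=0.012,
                 tau_plus=10.0, tau_minus=10.0):
    # Total weight change (delta w_j) for one synapse, all-to-all pairing:
    #   W(x) =  A_plus  * exp(-x / tau_plus)   for x > 0 (pre before post -> LTP)
    #   W(x) = -A_minus * exp( x / tau_minus)  for x < 0 (pre after post  -> LTD)
    dw = 0.0
    for t_post in post_times:        # postsynaptic firing times t_i^n
        for t_pre in pre_times:      # presynaptic arrival times t_j^f
            x = t_post - t_pre
            if x > 0:
                dw += A_plus * np.exp(-x / tau_plus)
            elif x < 0:
                dw -= A_minus * np.exp(x / tau_minus)
    return dw

# Pre spikes arrive a few ms before post spikes, so the net change is positive (LTP).
print(stdp_delta_w(pre_times=[10.0, 30.0], post_times=[13.0, 34.0]))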

[Error] github HTTP request failed

https://help.github.com/enterprise/2.13/user/articles/adding-a-new-ssh-key-to-your-github-account/

Linux disk-usage commands



df -h

Shows used and available space for each mounted filesystem, in human-readable units.


du -h

Shows disk usage for the directory tree, in human-readable units.


du -sh *

Shows a one-line summary for each entry directly under the current directory.


du -a

Shows usage down to individual files.
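
A handy combination for finding the biggest entries (assuming GNU coreutils, whose sort -h understands human-readable sizes):

>> du -sh * | sort -h

This sorts the per-entry summaries from smallest to largest, so the largest consumers end up at the bottom of the output.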

When rm -rf does not work

https://unix.stackexchange.com/questions/11238/how-to-get-over-device-or-resource-busy


How to get over 'device or resource busy'


>> lsof +D /path

Use this to find the processes that are still active under /path, then terminate them with the kill command.

After that, rm -rf works.
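
For example (the path and PID below are hypothetical):

>> lsof +D /home/dkim785/old_build
>> kill 12345
>> rm -rf /home/dkim785/old_build

If the process ignores the default signal, kill -9 12345 forces termination.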

