auxiliary weight (AW) method demonstration code

This is demonstration code for the AW layer. Please see my paper for more detail (Proceedings of JSAI 2018; an unofficial Japanese translation is also available).

The paper is missing the following references:
[Aoyama 91] Aoyama, T. and Ichikawa, H.: Obtaining the Correlation Indices between Drug Activity and Structural Parameters Using a Neural Network, Chem. Pharm. Bull., Vol. 39, No. 2, pp. 372-378 (1991)
[Guan 09] Guan, W., Zhou, M., Hampton, C. Y., Benigno, B. B., Walker, L. D., Gray, A., McDonald, J. F., and Fernandez, F. M.: Ovarian cancer detection from metabolomic liquid chromatography/mass spectrometry data by support vector machines, BMC Bioinformatics, Vol. 10, No. 1, p. 259 (2009)

This demo script extracts features from 1000-dimensional data (a training set of 2700 samples), of which only 12 dimensions contribute to the target label.

import numpy as np

data = np.random.randn(2700, 1000)   # input data: 2700 samples x 1000 features
y = 0                                # label score; y < 0 means the positive class
y += (data[:, 20] + 0.5) * (data[:, 30] - 0.4) * (data[:, 40] + 0.3)
y += (data[:, 50] + 0.25) * (data[:, 60] - 0.15) * (data[:, 70] + 0.05)
y += (data[:, 80] + 0.3) * (data[:, 90] - 0.2) * (data[:, 100] + 0.1)
y += (data[:, 110] - 0.2) * (data[:, 120] + 0.1) * (data[:, 130] - 0.0)
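
Continuing the snippet above, here is a hedged sketch of how the script might binarize the score and store the arrays; the rule y < 0 = positive comes from the comment above, but the array key names are assumptions and makeSimData.py is the authority:

labels = (y < 0).astype(np.int32)               # 1 for positive (y < 0), 0 otherwise
np.savez('sim.npz', data=data, labels=labels)   # hypothetical key names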

(Animation: feature extraction)

Setup

Install Chainer 2.0 or later (I run it with Chainer 3.3 on Anaconda3): https://chainer.org/

Prepare simulation data

python makeSimData.py

This creates sim.npz.
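
If you want a quick sanity check of the generated file, a minimal look with numpy is enough (the array names stored in sim.npz are not documented here, so the loop simply lists whatever is there):

import numpy as np

sim = np.load('sim.npz')
print(sim.files)                  # names of the stored arrays
for name in sim.files:
    print(name, sim[name].shape)  # shape of each array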

Single Run

python awLasso.py ARG GPU_ID OUTPUT_SUFFIX_NUMBER[0,intmax)

ARG looks like this: Aw=T_Decay=0_Lasso=1e-5_Lasso1=1e-3_P=0_awLimit=1e+10_nLayer=6_nMed=300_nMed1=1000 (followed on the command line by GPU_ID and the output suffix, e.g. 0 1). The fields mean the following; a small parsing sketch follows the option list.

  • Aw=T or F: Enable the AW layer
  • Decay=[0,1): Decay rate (L2 regularization)
  • Lasso=[0,1): Lasso penalty on all neural-network parameters
  • Lasso1=[0,1): Lasso penalty on the AW layer, or an additional Lasso penalty on the first middle layer
  • awLimit=[0,floatmax): Upper limit of the AW weights (useful for wide feature extraction)
  • nLayer=[1, about 8): Number of middle layers
  • nMed: Width (number of units) of the middle layers (except the first)
  • nMed1: Width (number of units) of the first middle layer
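
As an illustration only (awLasso.py's real argument parser may work differently), the underscore-separated key=value format could be read with a hypothetical helper like this:

arg = "Aw=T_Decay=0_Lasso=1e-5_Lasso1=1e-3_P=0_awLimit=1e+10_nLayer=6_nMed=300_nMed1=1000"
params = dict(item.split('=') for item in arg.split('_'))   # {'Aw': 'T', 'Decay': '0', ...}
print(params['Aw'], float(params['Lasso']), int(params['nLayer']))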

Batch Run

To reproduce the graphs in the paper, run the following commands. The actual results depend on random values, so you will not get exactly the same figures.

python makeRunScript.py   # creates run.sh
chmod 755 *.sh
sh run.sh                 # takes a long time
python dumpLog.py         # preprocesses the log data
python makeFig2.py        # plots the experimental result from the paper (it varies slightly with the random values)

Applying AW to your network

AwLayer.py contains almost all of the AW method implementation; awLasso.py is the main program. I recommend modifying awLasso.py when you apply the AW method to your own network.
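
For orientation only, here is a minimal sketch of an elementwise auxiliary-weight link in Chainer, assuming the AW layer scales each input feature by its own learnable weight; the class name and details are assumptions, so treat AwLayer.py as the authoritative implementation:

import numpy as np
import chainer
import chainer.functions as F

class AwSketch(chainer.Link):
    """Hypothetical elementwise scaling layer (conceptual sketch, not AwLayer.py)."""
    def __init__(self, n_in):
        super(AwSketch, self).__init__()
        with self.init_scope():
            # one auxiliary weight per input feature, initialized to 1
            self.w = chainer.Parameter(np.ones(n_in, dtype=np.float32))

    def __call__(self, x):
        # scale each feature; a Lasso penalty on self.w drives unused features toward 0
        return x * F.broadcast_to(self.w, x.shape)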

Hints:

  • Class "Aw" is Aw layer
  • Class "Aw_hook" is Aw weight updater.AW is require another update loop.
  • TrainToProp is Property for Aw_hook.It extracts property from trainer object. Trainer 's children object have the network (Chain) object.AW needs partial difference of the network.(I know it's dirty way, but easy way to add another update routine)
  • zeroOneGV or targetToGv makes backward() parameter. In case of classification such as softmax. There are plural outputs.So you have to select or mix the output for calculating partial difference ∂f(x)/∂x. gv is the select (or mix) coefficient. In case of 2 class classification, constant value (0,1) or (-1,1) is enough.
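
To make the last hint concrete, here is a hedged, self-contained sketch of the gradient-seeding pattern in Chainer; the stand-in model and shapes are assumptions and not the code from awLasso.py:

import numpy as np
import chainer
import chainer.links as L

model = L.Linear(10, 2)                           # stand-in 2-class scorer
x = chainer.Variable(np.random.randn(4, 10).astype(np.float32))
y = model(x)                                      # shape (4, 2) class scores

# gv selects or mixes the outputs; for 2 classes a constant (-1, 1) row is enough
gv = np.tile(np.array([-1.0, 1.0], dtype=np.float32), (4, 1))
y.grad = gv                                       # seed backward() with gv
y.backward()                                      # propagates ∂f(x)/∂x back to the input
print(x.grad.shape)                               # (4, 10): per-feature sensitivity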

Licence

Copyright Shimadzu Corp. This demonstration code may only be used for research purposes (including academic education). Commercial use is prohibited. Distribution of derivative software for your research is permitted if the distribution is limited to research use. Other redistribution is prohibited.

If you want to use this demonstration code for other purposes, please contact me at a-noda@shimadzu.co.jp.