default Gaussian descriptor values
http://amp.readthedocs.io/en/latest/gaussian.html says that the Gaussian feature vectors default to those used in doi.org/10.1021/nl5005674, but that doesn't appear to be the case: Table S1 of that paper shows 7 G2 functions and 12 G4 functions, whereas Amp defaults to 4 of each. Was a different paper meant to be cited? Or were the Amp defaults found to work better than those in the paper, so that only the docs need updating?
Comments (9)
-
reporter -

While I was building the save_to_prophet() function, I noticed that if eta was set too high in the symmetry functions, the Amp and PROPhet codes wouldn't give the same results, which turned out to be due to floating-point precision errors. The default eta values for the G2 function in Amp are very high (0.05, 4, 20, and 80, versus roughly 0.04, 0.4, 0.85, 1.5, 2.5, 4.2, and 8.5 in doi.org/10.1021/nl5005674). e^(-80) = 1.8E-35, which after scaling can easily break single-precision accuracy. I found that limiting eta to less than 10 gave exactly the same results in the two codes.

I'd recommend lowering the default values to something like 0.05, 0.5, 1, and 5. I'd be happy to put in a pull request if you agree this should be done.
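The underflow concern can be demonstrated directly. A minimal sketch (not Amp's or PROPhet's actual code) of the radial part of a Behler-Parrinello G2 term, exp(-eta * (Rij/Rc)^2), using a hypothetical cutoff Rc = 6.5:

```python
import numpy as np

def g2_radial(eta, rij, rc, dtype=np.float64):
    """Radial part of a G2 symmetry function term: exp(-eta * (rij/rc)**2)."""
    x = dtype(rij) / dtype(rc)
    return np.exp(dtype(-eta) * x * x)

rc = 6.5  # hypothetical cutoff radius

# With eta = 80 and rij near the cutoff, the term is ~1.8e-35.
double = g2_radial(80.0, rc, rc)

# float32 normal numbers bottom out around 1.2e-38, so this value is already
# close to the edge; multiplying by a modest scale factor (e.g. during
# fingerprint normalization) pushes it into the subnormal range, where
# precision degrades and values eventually flush to zero.
scaled = np.float32(double) * np.float32(1e-4)

print(double)  # ~1.8e-35
print(scaled)  # subnormal float32, below ~1.2e-38
```

With eta below about 10 the term stays many orders of magnitude above the float32 subnormal threshold, which is consistent with the observation that the two codes then agree exactly.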
-
repo owner - changed status to resolved

Fixes issue #202. → <<cset 19f95afa3ba4>>
-
Merge branch 'master' of https://bitbucket.org/andrewpeterson/amp
'master' of https://bitbucket.org/andrewpeterson/amp:
Fixes issue #202. → <<cset c9c34f083898>>
-
Merge branch 'master' of https://bitbucket.org/andrewpeterson/amp
'master' of https://bitbucket.org/andrewpeterson/amp:

- Small cleanup.
- Fixes issue #202.
- Sped up analysis plotting.
- io as a module name conflicted.
- PEP8 cleanup.
- Move save_to_prophet to amp/io.
- Allowed save_to_prophet function to use Amp objects with symmetry functions ordered differently than specified in the 'elements' descriptor.
- Split up NotImplementedErrors in save_to_prophet function.
- Extended save_to_prophet to allow for deeper neural nets.
- Implemented unit conversions for different LAMMPS styles in the save_to_prophet function.
- Fixed style (mainly the line's character limit) of save_to_prophet function.
- Put in warning if converting from Amp to PROPhet with too large an eta.
- Simplified indexing in save_to_prophet function.
- Added function to convert Amp object into PROPhet input file.
- fingerprintprimes was moved from the lossprime block to p.force_coefficient. This commit fixes AttributeError: 'Gaussian' object has no attribute 'fingerprintprimes' when using the neural network with pure Python; the fingerprintprimes variable is now moved inside the block when lossprime is True.

→ <<cset c998b885d833>>