Training parameters with 1 hidden layer cannot be used as a calculator
Suppose one trains an artificial neural network with a topology of 20 nodes in a single hidden layer. The model is set up as follows:
model=NeuralNetwork(hiddenlayers=(n), fortran=True)
After optimization, if one tries to use the .amp file, the following error is raised:
Traceback (most recent call last):
File "HeCu.py", line 42, in <module>
slab.get_potential_energy()
File "/users/melkhati/git/ase/ase/atoms.py", line 682, in get_potential_energy
energy = self._calc.get_potential_energy(self)
File "/users/melkhati/git/ase/ase/calculators/calculator.py", line 422, in get_potential_energy
energy = self.get_property('energy', atoms)
File "/users/melkhati/git/ase/ase/calculators/calculator.py", line 461, in get_property
self.calculate(atoms, [name], system_changes)
File "/users/melkhati/git/amp/amp/__init__.py", line 259, in calculate
self.descriptor.fingerprints[key])
File "/users/melkhati/git/amp/amp/model/__init__.py", line 66, in calculate_energy
symbol=symbol)
File "/users/melkhati/git/amp/amp/model/neuralnetwork.py", line 363, in calculate_atomic_energy
outputs = calculate_nodal_outputs(self.parameters, afp, symbol,)
File "/users/melkhati/git/amp/amp/model/neuralnetwork.py", line 634, in calculate_nodal_outputs
for hiddenlayer in hiddenlayers[1:]:
TypeError: 'int' object has no attribute '__getitem__'
The issue is that hiddenlayers is expected to be a tuple, but specifying a single hidden layer as (n) yields an int. I solved the problem with this patch:
diff --git a/amp/model/neuralnetwork.py b/amp/model/neuralnetwork.py
index 9e7fbc5..29b258e 100644
--- a/amp/model/neuralnetwork.py
+++ b/amp/model/neuralnetwork.py
@@ -631,6 +631,9 @@ def calculate_nodal_outputs(parameters, afp, symbol,):
temp[0, _] = o[1][0, _]
temp[0, np.shape(o[1])[1]] = 1.0
ohat[1] = temp
+ if isinstance(hiddenlayers, int):
+ hiddenlayers = (hiddenlayers,)
+
for hiddenlayer in hiddenlayers[1:]:
layer += 1
net[layer] = np.dot(ohat[layer - 1], weight[layer])
@@ -703,6 +706,9 @@ def calculate_dOutputs_dInputs(parameters, derafp, outputs, nsymbol,):
dOutputs_dInputs = {} # node values
dOutputs_dInputs[0] = _derafp
layer = 0 # input layer
+ if isinstance(hiddenlayers, int):
+ hiddenlayers = (hiddenlayers,)
+
for hiddenlayer in hiddenlayers[0:]:
layer += 1
temp = np.dot(np.matrix(dOutputs_dInputs[layer - 1]),
Another way to avoid this (not tested) would be to set the topology of the neural network like this:
model=NeuralNetwork(hiddenlayers=(n,), fortran=True)
However, it seems more natural to write hiddenlayers=(n).
Is it worth applying that patch to main?
Comments (10)
- reporter: edited description
- repo owner: I think that is not a bug, since (20) is not a tuple but (20,) is...
- reporter: marked as enhancement; marked as minor
- reporter: You are right, this is not a bug. It may be more of a usability issue because, from a user's point of view, it might feel more natural to specify a single hidden layer with n nodes as hiddenlayers=(n), unless one knows about one-element tuples. I have always found one-element tuples in Python confusing (I will use lists in my example):

>>> a = [1]
>>> type(a)
<type 'list'>
>>> b = (1)
>>> type(b)
<type 'int'>

Enclosing a single element inside [ ] creates a list. But since Python uses ( ) for grouping, it is actually the comma inside ( ) that creates the tuple:

>>> c = (1,)
>>> type(c)
<type 'tuple'>

Looking at neuralnetwork.py L17, the description of the parameter mentions only dict. I think we should mention both tuple and dict to give users more hints. I have updated the severity and type of the report.
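Something along these lines could also catch the mistake in the code itself. This is only a sketch (normalize_hiddenlayers is a hypothetical helper, not an existing Amp function), assuming hiddenlayers may be given as an int, a tuple/list, or a per-element dict:

def normalize_hiddenlayers(hiddenlayers):
    # Hypothetical helper, not part of Amp: always return the tuple/dict
    # form the rest of the code expects.
    if isinstance(hiddenlayers, int):      # e.g. hiddenlayers=20 or (20)
        return (hiddenlayers,)
    if isinstance(hiddenlayers, dict):     # assumed per-element form, e.g. {'Cu': (10, 10), 'He': 5}
        return dict((symbol, normalize_hiddenlayers(layers))
                    for symbol, layers in hiddenlayers.items())
    return tuple(hiddenlayers)             # already a tuple or list

# normalize_hiddenlayers(20)        -> (20,)
# normalize_hiddenlayers((10, 10))  -> (10, 10)
# normalize_hiddenlayers({'Cu': 5}) -> {'Cu': (5,)}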
- repo owner: Actually, it's the comma that makes the tuple in Python.

>>> c = 1,
>>> type(c)
<type 'tuple'>

But please update the documentation so it makes sense...
- reporter: changed status to closed
Updated docstring for hiddenlayer dictionary in neuralnetwork.py. Closes #136. → <<cset fb84510eccd6>>
- reporter: Merge branch 'master' into issue66
- master:
Documentation example script correction.
Updated docstring for hiddenlayer dictionary in neuralnetwork.py. Closes #136.
This commit fixes the following error when using tflow module together with newer numpy versions:
→ <<cset 90e70358bf86>>
- repo owner: Merge branch 'master' into issue66
- master:
Documentation example script correction.
Updated docstring for hiddenlayer dictionary in neuralnetwork.py. Closes #136.
This commit fixes the following error when using tflow module together with newer numpy versions:
→ <<cset 90e70358bf86>>
- reporter: Merge branch 'master' into symmetryfunctions
- master:
Updated docstring for hiddenlayer dictionary in neuralnetwork.py. Closes #136.
→ <<cset 7387a988f176>>
- I had the same problem. I would agree with Muammar that it is more user-friendly to catch the mistake and correct it.