Keras concatenate in loop gives graph disconnected error

June 27, 2017, at 04:03 AM

I am trying to implement an algorithm that requires looping over classes in keras, and am having difficulty understanding some of the behavior.

The script below runs without error:

from keras.layers import Input, Dense, Lambda, Layer
from keras.models import Model
from keras import backend as K
from theano import tensor as T
import numpy as np
batch_size = 100
original_dim = 15
n_classes = 2
inp = Input(shape=(original_dim,), name='input')
output_layer = Dense(10)
(y_vals,
 inp_and_y,
 output) = [[None] * n_classes for _ in range(3)]
for k in range(n_classes):
    y_npy = np.eye(n_classes)[k].reshape(1, -1)
    y_vals[k] = Input(tensor=K.repeat_elements(K.variable(y_npy),
                                               batch_size,
                                               axis=0),
                      name='y_vector_{}'.format(k+1))
    inp_and_y[k] = Lambda(lambda x: K.concatenate([x, y_vals[k]], axis=-1),
                          name='input_and_y_{}'.format(k+1),
                          output_shape=(original_dim + n_classes,))(inp)
    #inp_and_y[k] = Lambda(lambda args: K.concatenate(args, axis=-1),
    #                      name='input_and_y_{}'.format(k+1),
    #                      output_shape=(original_dim + n_classes,))([inp, y_vals[k]])
    #inp_and_y[k] = concatenate([inp, y_vals[k]])
    output[k] = output_layer(inp_and_y[k])
model = Model(inputs=inp, outputs=output)

However, if I comment out the first inp_and_y[k] assignment and un-comment the second one, it gives the error

RuntimeError: Graph disconnected: cannot obtain value for tensor Reshape{2}.0 at layer "y_vector_2". The following previous layers were accessed without issue: ['input']

I get the same error if I use the layers.concatenate function, although that requires giving the inp layer a batch_shape rather than just a shape. Can anyone explain why the second call doesn't know how to connect the layers, but the first one does? It seems like variables are getting overwritten in the loop, but the error also occurs with n_classes = 1, where the loop body runs only once.
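For reference, here is a minimal pure-Python sketch (no Keras involved) of the overwriting behavior I suspect: lambdas defined in a loop close over the loop variable itself, not over its value at definition time, so every lambda sees the variable's final value.

```python
# Each lambda closes over the loop variable k itself, so once the
# loop/comprehension has finished, all of them see k's final value.
funcs = [lambda: k for k in range(3)]
print([f() for f in funcs])  # [2, 2, 2]

# Binding k as a default argument snapshots its value per iteration.
funcs_bound = [lambda k=k: k for k in range(3)]
print([f() for f in funcs_bound])  # [0, 1, 2]
```

That said, this alone would not explain why the error still appears with n_classes = 1, where only one lambda is ever created.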

It also seems similar to this GitHub issue, but I have updated to the latest Keras (2.0.5) and Theano (0.10.0dev1).
