TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("lstm_1_input:0", shape=(?, ?, 4), dtype=float32) is not an element of this graph. #1

Closed
lefnire opened this issue Aug 10, 2017 · 19 comments

Comments

@lefnire

lefnire commented Aug 10, 2017

Fresh clone, data/bitcoin.csv unzipped, Keras(2.0.6) TensorFlow(1.2.1) Python(3.6.2) (full pip freeze).

[Edit] Also tried on Python 2.7, same error. (full pip freeze)

Full error:

(btc3) lefnire@lefnire-ubuntu:~/Sites/btc/github/Multidimensional-LSTM-BitCoin-Time-Series$ python run.py 
Using TensorFlow backend.
> Creating x & y data files...
> Clean datasets created in file `data/clean_data.h5.h5`
> Generating clean data from: data/clean_data.h5 with batch_size: 100
> Clean data has 180610 data rows. Training on 144488 rows with 722 steps-per-epoch
> Compilation Time :  0.010142087936401367
> Testing model on 36122 data rows with 361 steps
2017-08-09 17:15:15.447882: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-09 17:15:15.447905: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-09 17:15:15.447909: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-08-09 17:15:15.447913: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-09 17:15:15.447917: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-08-09 17:15:15.563391: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2017-08-09 17:15:15.563705: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties: 
name: GeForce GTX 1080 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.582
pciBusID 0000:01:00.0
Total memory: 10.90GiB
Free memory: 10.02GiB
2017-08-09 17:15:15.563716: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0 
2017-08-09 17:15:15.563719: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0:   Y 
2017-08-09 17:15:15.563724: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0)
> Compilation Time :  0.009964227676391602
Epoch 1/2
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/home/lefnire/anaconda3/envs/btc3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 942, in _run
    allow_operation=False)
  File "/home/lefnire/anaconda3/envs/btc3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2584, in as_graph_element
    return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
  File "/home/lefnire/anaconda3/envs/btc3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2663, in _as_graph_element_locked
    raise ValueError("Tensor %s is not an element of this graph." % obj)
ValueError: Tensor Tensor("lstm_1_input:0", shape=(?, ?, 4), dtype=float32) is not an element of this graph.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/lefnire/anaconda3/envs/btc3/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/home/lefnire/anaconda3/envs/btc3/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "run.py", line 64, in fit_model_threaded
    epochs=configs['model']['epochs']
  File "/home/lefnire/anaconda3/envs/btc3/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "/home/lefnire/anaconda3/envs/btc3/lib/python3.6/site-packages/keras/models.py", line 1117, in fit_generator
    initial_epoch=initial_epoch)
  File "/home/lefnire/anaconda3/envs/btc3/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "/home/lefnire/anaconda3/envs/btc3/lib/python3.6/site-packages/keras/engine/training.py", line 1840, in fit_generator
    class_weight=class_weight)
  File "/home/lefnire/anaconda3/envs/btc3/lib/python3.6/site-packages/keras/engine/training.py", line 1565, in train_on_batch
    outputs = self.train_function(ins)
  File "/home/lefnire/anaconda3/envs/btc3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2268, in __call__
    **self.session_kwargs)
  File "/home/lefnire/anaconda3/envs/btc3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 789, in run
    run_metadata_ptr)
  File "/home/lefnire/anaconda3/envs/btc3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 945, in _run
    + e.args[0])
TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("lstm_1_input:0", shape=(?, ?, 4), dtype=float32) is not an element of this graph.

I realize you're likely not keen on supporting a blog post's code demo, but I'm posting this just in case someone knows the answer off the top of their head.

@johndpope

I had a similar problem the other day - try googling it. For me, this fix worked:
2014mchidamb/AdversarialChess#4

Or set up an older TensorFlow 1.x version with Miniconda and try that:
https://gist.github.com/johndpope/187b0dd996d16152ace2f842d43e3990

@lefnire
Author

lefnire commented Aug 10, 2017

Tried TensorFlow 1.0.0 (error & pip freeze); will try per your AdversarialChess comments tomorrow. Thanks!

@lefnire
Author

lefnire commented Aug 10, 2017

@johndpope I'm having trouble connecting the fix from your prior issue (AdversarialChess) to this situation conceptually; I don't know what I'd change here. Maybe a Keras/TF version change could be a quick fix - which versions are you using (per pip freeze)? Also a question for @jaungiers.

@johndpope

Sorry - TensorFlow changed their syntax at some point and I thought it was connected with this.
There's a bunch of other TensorFlow LSTM examples that I've cloned; you may be able to make progress by referencing their code.
(screenshot attached: 2017-08-10 at 3:26 pm)

@lefnire
Author

lefnire commented Aug 11, 2017

Looks like an issue with threading - maybe args aren't being passed properly, or a race condition or something. When I remove the threading line and call fit_model_threaded() directly, all's well! (Python 3.5, TF 1.2, Keras 2.0.6)

# t = threading.Thread(target=fit_model_threaded, args=[model, data_gen_train, steps_per_epoch, configs])
# t.start()
fit_model_threaded(model, data_gen_train, steps_per_epoch, configs)

If I get threading back in business I'll submit a PR

@lefnire lefnire closed this as completed Aug 11, 2017
@brandonnchoii

@lefnire were you ever able to get the threading issue resolved? I am running into a similar issue where I get the feed_dict error when I run the training on a separate thread.

@lefnire
Author

lefnire commented Sep 25, 2017

Alas, no. I haven't messed with this repo in a while, sorry!

@brandonnchoii

@lefnire I am running all my model training on one thread but can't seem to use that model to predict on my main I/O loop thread. Let me know if you find a solution!

@johndpope

Not sure if this helps - but I found some threading code in another Python project:

https://github.com/eragonruan/refinenet-image-segmentation/blob/7f8fc11c63ac349c83ddbd232626e4b8361e38fc/utils/input_utils.py

It seems like there's a central queue to orchestrate things.

@chenc10

chenc10 commented Dec 11, 2017

I happened to find a solution to this, from avital's answer in keras-team/keras#2397.

Right after loading or constructing your model, save the TensorFlow graph:

graph = tf.get_default_graph()

In the other thread (or perhaps in an asynchronous event handler), do:

global graph
with graph.as_default():
    (... do inference here ...)

Root cause:

The default graph is a property of the current thread. If you create a new thread, and wish to use the default graph in that thread, you must explicitly add a with g.as_default(): in that thread's function. (See - https://www.tensorflow.org/api_docs/python/tf/get_default_graph.)
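
For reference, here is a minimal, self-contained sketch of that pattern (my own toy example, assuming Keras 2.x on the TF 1.x backend; the model, dummy data, and train_in_thread() are placeholders standing in for this repo's actual model and fit_model_threaded()):

import threading
import numpy as np
import tensorflow as tf
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Build and compile the model on the main thread, then capture its graph.
model = Sequential()
model.add(LSTM(32, input_shape=(None, 4)))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
graph = tf.get_default_graph()

def train_in_thread(model, x, y):
    # Without as_default(), this worker thread gets a fresh, empty default
    # graph, and Keras raises "... is not an element of this graph".
    with graph.as_default():
        model.fit(x, y, epochs=1, batch_size=32)

# Dummy data with 4 features, matching the (?, ?, 4) input in the error above.
x_train = np.random.rand(100, 50, 4).astype('float32')
y_train = np.random.rand(100, 1).astype('float32')

t = threading.Thread(target=train_in_thread, args=[model, x_train, y_train])
t.start()
t.join()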

@nandhakumarm

The same happened to me. Restarting the kernel (or clearing the history/cache) worked.

@mohammedyunus009

As far as I know, there are still a few bugs in Keras, mainly in the load_model() function. Today I successfully solved 5-10 problems just by restarting - maybe you should try that.

@mohammedyunus009

mohammedyunus009 commented May 11, 2018

I was facing the same problem with Flask and TensorFlow, but I was able to solve it by installing Cython:

conda install cython
or
pip install cython

@mohammedyunus009

mohammedyunus009 commented May 11, 2018

Also install:

conda install botocore

Maybe these bugs arise on AWS production servers.

@mohammedyunus009

Finally solved it completely. This worked for me:

from keras import backend as K

After predicting on my data I inserted this line, and then loaded the model again:

K.clear_session()

@vikramforsk2019

"""
The error message TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("...", dtype=dtype) is not an element of this graph can also arise if you run a session outside of the scope of its with statement.
"""

To solve this error, use:

from keras import backend as K

# before prediction
K.clear_session()

# after prediction
K.clear_session()
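
A minimal, self-contained sketch of this clear_session() pattern (my own example; predict_fresh(), the 'model.h5' path, and the 50-timestep dummy input are placeholders, not names from this repo). Resetting the backend before (re)loading the model rebuilds its tensors in the current default graph, so predict() no longer references a stale one:

import numpy as np
from keras import backend as K
from keras.models import load_model

def predict_fresh(x, model_path='model.h5'):
    K.clear_session()               # drop any stale graph/session state
    model = load_model(model_path)  # the model is rebuilt in the fresh default graph
    preds = model.predict(x)
    K.clear_session()               # reset again so the next call starts clean
    return preds

# Example (assumes a saved Keras model at model_path); the dummy batch has
# 4 features to match the (?, ?, 4) input shape from the error above.
print(predict_fresh(np.zeros((1, 50, 4), dtype='float32')))

Note that reloading the model on every call is slow; the graph.as_default() approach described earlier avoids the reload when you keep a single model in memory.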

@ShubhamOjha

(Quoted @mohammedyunus009's K.clear_session() fix above.)

Thanks, this solved the issue.

@Sidmaurya

Sidmaurya commented Jan 18, 2020

Finally solved 👍
Use from tensorflow.keras import backend as K instead of from keras import backend as K.

K.clear_session()  # before predicting
