TensorBoard Error: No dashboards are active for current data set

I am trying to use TensorBoard, but every time I run a program with TensorFlow and go to localhost:6006 to view the visualization, I get an error.

Here is my code

a = tf.add(1, 2,)
b = tf.multiply(a, 3)

with tf.Session() as sess:
    writer = tf.summary.FileWriter("output", sess.graph)
    print(sess.run(b))
    writer.close()

When I go to the command prompt and enter

tensorboard --logdir=C:\path\to\output\folder

It returns with

TensorBoard 0.1.8 at http://MYCOMP:6006 (Press CTRL+C to quit)

When I go to localhost:6006 it states

No dashboards are active for the current data set. Probable causes: - You haven’t written any data to your event files. - TensorBoard can’t find your event files.

I have looked at this link (Tensorboard: No dashboards are active for the current data set), but it doesn't seem to fix this issue.

And I am running this on Windows 10

What do I do to fix this issue? Am I giving the right path for Tensorboard in the command prompt?

Thank you in advance



Solution 1:[1]

Your issue may be related to the drive you are starting TensorBoard from versus the drive your logdir is on. TensorBoard uses a colon to separate an optional run name from the path in the --logdir flag, so C:\path\to\output\folder is being interpreted as the path \path\to\output\folder with run name C.

This can be worked around either by starting TensorBoard from the same drive as your log directory or by providing an explicit run name, e.g. --logdir=mylogs:C:\path\to\output\folder

See here for reference to the issue.
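To see why the drive letter gets swallowed, here is a simplified sketch of the colon-splitting behaviour (an illustration of the parsing described above, not TensorBoard's actual code):

```python
def parse_logdir_entry(entry):
    # Older TensorBoard versions split a --logdir entry on the first
    # colon: everything before it is treated as an optional run name.
    if ':' in entry:
        name, path = entry.split(':', 1)
        return name, path
    return None, entry

# A bare Windows path is misread: the drive letter becomes the run name.
print(parse_logdir_entry(r"C:\path\to\output\folder"))
# -> ('C', '\\path\\to\\output\\folder')

# An explicit run name keeps the drive letter inside the path.
print(parse_logdir_entry(r"mylogs:C:\path\to\output\folder"))
# -> ('mylogs', 'C:\\path\\to\\output\\folder')
```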

Solution 2:[2]

On Windows, I found a workaround.

cd /path/to/log

tensorboard --logdir=./

Here you can use the path as normal. Keep in mind not to put spaces around the equals sign; writing it as logdir = ./ gave me this error:

No dashboards are active for the current data set. Probable causes: - You haven’t written any data to your event files. - TensorBoard can’t find your event files.

Solution 3:[3]

In Windows 10, this command works

tensorboard --logdir=training/

Here training is the directory where the output files are written. Please note that the path has no quotes and ends with a slash (/); both are important.

Solution 4:[4]

Try this instead:

tensorboard --logdir="C:\path\to\output\folder"

Solution 5:[5]

Well, you have several issues with your code.

  1. You are creating a summary writer (tf.summary.FileWriter), but you never actually write anything with it. print(sess.run(b)) has nothing to do with TensorBoard, if you expected it to have some effect; it just prints the value of b.
  2. You don't create any summary object to attach a value to.
  3. You are probably passing the wrong folder to TensorBoard.

In more detail:

  1. You need a summary op to write a summary, e.g. tf.summary.scalar to record a scalar. Something like tf.summary.scalar("b_value", b) writes the value of b to a summary.
  2. Then you need to run the summary op in a session to evaluate it, e.g. summary = sess.run(summary_scalar).
  3. Write the result with the writer you defined previously: writer.add_summary(summary).
  4. Now there is something to see in TensorBoard, using tensorboard --logdir=output in a terminal.
  5. In general you will probably want tf.summary.merge_all() to gather all your summaries into a single op to run.
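Putting these steps together, here is a minimal end-to-end sketch, written against the tf.compat.v1 API so it also runs under TensorFlow 2.x (the summary name "b_value" is just an example):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # run in graph mode, needed on TF 2.x

a = tf.add(1, 2)
b = tf.multiply(a, 3)

# Step 1: a summary op that records the value of b (cast to float for the scalar op).
tf.compat.v1.summary.scalar("b_value", tf.cast(b, tf.float32))
# Step 5: merge all summary ops into a single op.
merged = tf.compat.v1.summary.merge_all()

with tf.compat.v1.Session() as sess:
    writer = tf.compat.v1.summary.FileWriter("output", sess.graph)
    # Step 2: evaluate the merged summary (and b) in the session.
    summary, value = sess.run([merged, b])
    # Step 3: write the evaluated summary to the event file.
    writer.add_summary(summary)
    writer.close()
    print(value)  # 9
```

After this runs, `tensorboard --logdir=output` has an event file with actual summary data to display.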

Hope this helps.

Solution 6:[6]

Find the path to main.py inside tensorboard directory and copy it. It should be something like this:

C:/Users/<Your Username>/Anaconda3/envs/tensorflow/Lib/site-packages/tensorboard/main.py

or

C:/Users/<Your Username>/anaconda/envs/tf/lib/python3.5/site-packages/tensorboard/main.py

Once you know the correct path, run this command in the Anaconda Prompt, using that path to main.py inside the tensorboard directory. This worked for me on Windows.

python C:/Users/Username/Anaconda3/envs/tensorflow/Lib/site-packages/tensorboard/main.py --logdir=foo:<path to your log directory>

Credits: KyungHoon Kim
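As an alternative to hunting through the Anaconda directories by hand, you can ask Python where the installed tensorboard package keeps its main module (a sketch, assuming tensorboard is importable in the current environment):

```python
import importlib.util

# Locate tensorboard's main module in the active environment,
# instead of guessing the Anaconda site-packages path.
try:
    spec = importlib.util.find_spec("tensorboard.main")
    print(spec.origin if spec else "tensorboard.main not found")
except ModuleNotFoundError:
    print("tensorboard is not installed")
```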

Solution 7:[7]

Try placing the directory path inside quotes.

Example:

tensorboard --logdir="C:/Users/admin/Desktop/ML/log"

Solution 8:[8]

In case anyone still runs into this problem, I suggest trying a different port, e.g. tensorboard --logdir=logs --port 5000. That worked for me.
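Before switching, you can check whether something is already listening on the default port; a small sketch (the fallback port 5000 mirrors the command above and is just an example):

```python
import socket

def port_in_use(port, host="localhost"):
    # Try to connect; a successful connection means something
    # is already listening on that port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

# Fall back to 5000 if the default TensorBoard port 6006 is taken.
port = 5000 if port_in_use(6006) else 6006
print(f"tensorboard --logdir=logs --port {port}")
```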

Solution 9:[9]

When I ran the TensorFlow tutorial (https://www.tensorflow.org/programmers_guide/tensorboard_histograms), I ran into the same issue. I went ahead and tried the solution referenced by hpabst above, and it worked like a champ. In a terminal (I am running CentOS) I ran: tensorboard --logdir=mydir:~/mlDemo/

Solution 10:[10]

I am working on Windows 10 as well. I tried your code, running tensorboard from the same drive, from a different drive, and with a local path. In all three cases I was able to see the graph.

One possibility is that you need to change your host (I cannot visualize with localhost:6006 either). Try http://MYCOMP:6006 to check if you see any difference.

Note: my TensorBoard version is 1.8.0 (maybe updating your TensorBoard makes a difference).

Solution 11:[11]

I observed that once TensorBoard gets into a bad state, it keeps failing on subsequent runs, because:

  1. It does not kill previous processes automatically.
  2. It reuses previous state when starting the dashboard.

Steps to mitigate the bad state:

  1. Kill all running tensorboard processes.
  2. Clear previous tensorboard state.

In a Jupyter notebook:

! powershell "echo 'checking for existing tensorboard processes'"
! powershell "ps | Where-Object {$_.ProcessName -eq 'tensorboard'}"

! powershell "ps | Where-Object {$_.ProcessName -eq 'tensorboard'}| %{kill $_}"

! powershell "echo 'cleaning tensorboard temp dir'"
! powershell "rm $env:TEMP\.tensorboard-info\*"

! powershell "ps | Where-Object {$_.ProcessName -eq 'tensorboard'}"


%load_ext tensorboard
%tensorboard --logdir="logs\fit" --host localhost

If it times out in Jupyter, go to http://localhost:6006/#scalars in the browser and check there.

Solution 12:[12]

Well, I tried almost every solution here but nothing worked. Finally, after several weeks, I was able to fix this problem. Here is what I did:

  1. Killed the existing tensorboard processes.
  2. Shut down all Jupyter notebook instances.
  3. Deleted the TensorBoard cache files from my local temp directory - C:\Users\sethuri\AppData\Local\Temp
  4. Launched a Jupyter notebook instance and invoked TensorBoard.
  5. I didn't see the visualization at first, but after reloading the page it worked.
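Step 3 can be scripted instead of clearing the temp files by hand; a cross-platform sketch, assuming TensorBoard's state files live in <TEMP>\.tensorboard-info as in recent versions:

```python
import glob
import os
import tempfile

# TensorBoard keeps per-instance state files in <TEMP>/.tensorboard-info;
# stale entries there can leave the dashboard in a bad state.
info_dir = os.path.join(tempfile.gettempdir(), ".tensorboard-info")
for path in glob.glob(os.path.join(info_dir, "*")):
    if os.path.isfile(path):
        os.remove(path)
        print("removed", path)
```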

Solution 13:[13]

You need to close the port if it is still open from a previous run. I wrote a function for closing the port. Here is an example:

import tensorflow as tf
import datetime
import os
import webbrowser
import subprocess
import pandas as pd
import io
import re


path = os.getcwd()
port = 6006

def close_port(port):
    # List connections on the port (Windows: netstat -ano | findstr :PORT)
    open_ports = subprocess.getoutput(f"netstat -ano | findstr :{port}")
    open_ports = re.sub(r"\s+", " ", open_ports)
    open_ports = open_ports.lstrip()
    open_ports = open_ports.replace(" TCP", "\nTCP")
    open_ports_io = io.StringIO(open_ports)

    if len(open_ports) > 0:
        open_ports = pd.read_csv(open_ports_io, sep=' ', header=None)
        for i in range(len(open_ports)):
            # Column 3 is the state, column 4 is the PID in netstat -ano output.
            if open_ports.loc[i, 3] == "LISTENING":
                res = subprocess.getoutput(f"taskkill /pid {open_ports.loc[i, 4]} /F")
                print(res)

 
#load a dataset and define a model for training
mnist = tf.keras.datasets.mnist

(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def create_model():
    return tf.keras.models.Sequential([
           tf.keras.layers.Flatten(input_shape=(28, 28)),
           tf.keras.layers.Dense(512, activation='relu'),
           tf.keras.layers.Dropout(0.2),
           tf.keras.layers.Dense(10, activation='softmax')
           ])


model = create_model()
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

log_dir = "./logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)

model.fit(x=x_train,
          y=y_train,
          epochs=3,
          validation_data=(x_test, y_test),
          callbacks=[tensorboard_callback])


close_port(port)
webbrowser.open(f'http://localhost:{port}/')
os.system(f'tensorboard --logdir={path}/logs/fit --port {port}')