Tensor error when using DiffAugment (data-efficient-gans) for data augmentation on my own dataset

I'm trying to create synthetic data from the pictures in a folder called Bathroom using DiffAugment (data-efficient-gans).

Running the command from the repo, with no path/location errors:

!python3 run_low_shot.py --dataset="/content/drive/My Drive/2-Estudios/viu-master_ai/tfm-deep_vision/input/common_misclassifications/Bathroom/" --resolution=64

The following error appears:

Loading images from "/content/drive/My Drive/2-Estudios/viu-master_ai/tfm-deep_vision/input/common_misclassifications/Bathroom/"
Creating dataset "/content/drive/My Drive/2-Estudios/viu-master_ai/tfm-deep_vision/input/common_misclassifications/Bathroom/"
Added 81 images.
Local submit - run_dir: results/00000-DiffAugment-stylegan2--64-batch16-1gpu-color-translation-cutout
dnnlib: Running training.training_loop.training_loop() on localhost...
Streaming data using training.dataset.TFRecordDataset...
Dataset shape = [3, 64, 64]
Dynamic range = [0, 255]
Label size    = 0
Constructing networks...
Setting up TensorFlow plugin "fused_bias_act.cu": Preprocessing... Compiling... Loading... Done.
Setting up TensorFlow plugin "upfirdn_2d.cu": Preprocessing... Compiling... Loading... Done.

G                           Params    OutputShape       WeightShape     
---                         ---       ---               ---             
latents_in                  -         (?, 512)          -               
labels_in                   -         (?,)              -               
lod                         -         ()                -               
dlatent_avg                 -         (512,)            -               
G_mapping/latents_in        -         (?, 512)          -               
G_mapping/labels_in         -         (?,)              -               
G_mapping/Normalize         -         (?, 512)          -               
G_mapping/Dense0            262656    (?, 512)          (512, 512)      
G_mapping/Dense1            262656    (?, 512)          (512, 512)      
G_mapping/Dense2            262656    (?, 512)          (512, 512)      
G_mapping/Dense3            262656    (?, 512)          (512, 512)      
G_mapping/Dense4            262656    (?, 512)          (512, 512)      
G_mapping/Dense5            262656    (?, 512)          (512, 512)      
G_mapping/Dense6            262656    (?, 512)          (512, 512)      
G_mapping/Dense7            262656    (?, 512)          (512, 512)      
G_mapping/Broadcast         -         (?, 10, 512)      -               
G_mapping/dlatents_out      -         (?, 10, 512)      -               
G_synthesis/dlatents_in     -         (?, 10, 512)      -               
G_synthesis/4x4/Const       8192      (?, 512, 4, 4)    (1, 512, 4, 4)  
G_synthesis/4x4/Conv        2622465   (?, 512, 4, 4)    (3, 3, 512, 512)
G_synthesis/4x4/ToRGB       264195    (?, 3, 4, 4)      (1, 1, 512, 3)  
G_synthesis/8x8/Conv0_up    2622465   (?, 512, 8, 8)    (3, 3, 512, 512)
G_synthesis/8x8/Conv1       2622465   (?, 512, 8, 8)    (3, 3, 512, 512)
G_synthesis/8x8/Upsample    -         (?, 3, 8, 8)      -               
G_synthesis/8x8/ToRGB       264195    (?, 3, 8, 8)      (1, 1, 512, 3)  
G_synthesis/16x16/Conv0_up  2622465   (?, 512, 16, 16)  (3, 3, 512, 512)
G_synthesis/16x16/Conv1     2622465   (?, 512, 16, 16)  (3, 3, 512, 512)
G_synthesis/16x16/Upsample  -         (?, 3, 16, 16)    -               
G_synthesis/16x16/ToRGB     264195    (?, 3, 16, 16)    (1, 1, 512, 3)  
G_synthesis/32x32/Conv0_up  2622465   (?, 512, 32, 32)  (3, 3, 512, 512)
G_synthesis/32x32/Conv1     2622465   (?, 512, 32, 32)  (3, 3, 512, 512)
G_synthesis/32x32/Upsample  -         (?, 3, 32, 32)    -               
G_synthesis/32x32/ToRGB     264195    (?, 3, 32, 32)    (1, 1, 512, 3)  
G_synthesis/64x64/Conv0_up  2622465   (?, 512, 64, 64)  (3, 3, 512, 512)
G_synthesis/64x64/Conv1     2622465   (?, 512, 64, 64)  (3, 3, 512, 512)
G_synthesis/64x64/Upsample  -         (?, 3, 64, 64)    -               
G_synthesis/64x64/ToRGB     264195    (?, 3, 64, 64)    (1, 1, 512, 3)  
G_synthesis/images_out      -         (?, 3, 64, 64)    -               
G_synthesis/noise0          -         (1, 1, 4, 4)      -               
G_synthesis/noise1          -         (1, 1, 8, 8)      -               
G_synthesis/noise2          -         (1, 1, 8, 8)      -               
G_synthesis/noise3          -         (1, 1, 16, 16)    -               
G_synthesis/noise4          -         (1, 1, 16, 16)    -               
G_synthesis/noise5          -         (1, 1, 32, 32)    -               
G_synthesis/noise6          -         (1, 1, 32, 32)    -               
G_synthesis/noise7          -         (1, 1, 64, 64)    -               
G_synthesis/noise8          -         (1, 1, 64, 64)    -               
images_out                  -         (?, 3, 64, 64)    -               
---                         ---       ---               ---             
Total                       27032600                                    


D                    Params    OutputShape       WeightShape     
---                  ---       ---               ---             
images_in            -         (?, 3, 64, 64)    -               
Pad                  -         (?, 3, 64, 64)    -               
64x64/FromRGB        2048      (?, 512, 64, 64)  (1, 1, 3, 512)  
64x64/Conv0          2359808   (?, 512, 64, 64)  (3, 3, 512, 512)
64x64/Conv1_down     2359808   (?, 512, 32, 32)  (3, 3, 512, 512)
64x64/Skip           262144    (?, 512, 32, 32)  (1, 1, 512, 512)
32x32/Conv0          2359808   (?, 512, 32, 32)  (3, 3, 512, 512)
32x32/Conv1_down     2359808   (?, 512, 16, 16)  (3, 3, 512, 512)
32x32/Skip           262144    (?, 512, 16, 16)  (1, 1, 512, 512)
16x16/Conv0          2359808   (?, 512, 16, 16)  (3, 3, 512, 512)
16x16/Conv1_down     2359808   (?, 512, 8, 8)    (3, 3, 512, 512)
16x16/Skip           262144    (?, 512, 8, 8)    (1, 1, 512, 512)
8x8/Conv0            2359808   (?, 512, 8, 8)    (3, 3, 512, 512)
8x8/Conv1_down       2359808   (?, 512, 4, 4)    (3, 3, 512, 512)
8x8/Skip             262144    (?, 512, 4, 4)    (1, 1, 512, 512)
4x4/MinibatchStddev  -         (?, 513, 4, 4)    -               
4x4/Conv             2364416   (?, 512, 4, 4)    (3, 3, 513, 512)
4x4/Dense0           4194816   (?, 512)          (8192, 512)     
Output               513       (?,)              (512, 1)        
scores_out           -         (?,)              -               
---                  ---       ---               ---             
Total                26488833                                    

Building TensorFlow graph...
Traceback (most recent call last):
  File "run_low_shot.py", line 171, in <module>
    main()
  File "run_low_shot.py", line 165, in main
    run(**vars(args))
  File "run_low_shot.py", line 94, in run
    dnnlib.submit_run(**kwargs)
  File "/content/data-efficient-gans/DiffAugment-stylegan2/dnnlib/submission/submit.py", line 343, in submit_run
    return farm.submit(submit_config, host_run_dir)
  File "/content/data-efficient-gans/DiffAugment-stylegan2/dnnlib/submission/internal/local.py", line 22, in submit
    return run_wrapper(submit_config)
  File "/content/data-efficient-gans/DiffAugment-stylegan2/dnnlib/submission/submit.py", line 280, in run_wrapper
    run_func_obj(**submit_config.run_func_kwargs)
  File "/content/data-efficient-gans/DiffAugment-stylegan2/training/training_loop.py", line 217, in training_loop
    G_loss, D_loss, D_reg = dnnlib.util.call_func_by_name(G=G_gpu, D=D_gpu, training_set=training_set, minibatch_size=minibatch_gpu_in, reals=reals_read, real_labels=labels_read, **loss_args)
  File "/content/data-efficient-gans/DiffAugment-stylegan2/dnnlib/util.py", line 256, in call_func_by_name
    return func_obj(*args, **kwargs)
  File "/content/data-efficient-gans/DiffAugment-stylegan2/training/loss.py", line 16, in ns_DiffAugment_r1
    labels = training_set.get_random_labels_tf(minibatch_size)
  File "/content/data-efficient-gans/DiffAugment-stylegan2/training/dataset.py", line 193, in get_random_labels_tf
    return tf.zeros([minibatch_size], dtype=tf.int32)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/ops/array_ops.py", line 2338, in zeros
    output = _constant_if_small(zero, shape, dtype, name)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/ops/array_ops.py", line 2295, in _constant_if_small
    if np.prod(shape) < 1000:
  File "<__array_function__ internals>", line 6, in prod
  File "/usr/local/lib/python3.7/dist-packages/numpy/core/fromnumeric.py", line 3052, in prod
    keepdims=keepdims, initial=initial, where=where)
  File "/usr/local/lib/python3.7/dist-packages/numpy/core/fromnumeric.py", line 86, in _wrapreduction
    return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/framework/ops.py", line 736, in __array__
    " array.".format(self.name))
NotImplementedError: Cannot convert a symbolic Tensor (Inputs/minibatch_gpu_in:0) to a numpy array.

The pictures in the Bathroom folder are all .jpg, and the result is the same regardless of the resolution I choose in the command above.
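
For reference, the exception appears to come from np.prod being called on a shape list that contains a symbolic tensor; TF 1.x together with NumPy >= 1.20 is often reported to trigger exactly this message. A minimal sketch that reproduces the same error under that assumption:

import numpy as np
import tensorflow as tf  # TF 1.x, e.g. 1.15

# Symbolic (placeholder) minibatch size, like the one used by the training loop
minibatch_size = tf.compat.v1.placeholder(tf.int32, shape=[], name='minibatch_gpu_in')

# tf.zeros runs np.prod on the shape list; with NumPy >= 1.20 the symbolic
# tensor inside the list raises NotImplementedError instead of the TypeError
# that TF 1.x expects and catches.
labels = tf.zeros([minibatch_size], dtype=tf.int32)

Pinning NumPy below 1.20 before running the TensorFlow version is a workaround people commonly report for this message, though I haven't verified it here.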

By the way, I'm not really clear on how to specify the number of output pictures for my own dataset.

Is anyone else working with their own dataset on that repo? Thanks.



Solution 1:[1]

I was trying to use this repo: https://github.com/mit-han-lab/data-efficient-gans

In the end I got it running in Colab with the PyTorch version:

  • Having installed torch v1.10 with CUDA 11.1
  • Having created a zip of pics resized to 256x256 (like the resolution of the Obama dataset they mention); a sketch of this step follows below
  • Following their PyTorch instructions, then:
  • !python train.py --outdir=training-runs --data="</your_path_for_256x256pics.zip>" --gpus=1
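
A rough sketch of the zip step, assuming Pillow is available in the Colab runtime and that the source folder contains only .jpg files (both paths below are placeholders to adapt):

import os
import zipfile
from PIL import Image

src_dir = "/content/your_source_pics"               # placeholder: folder with the original pics
out_zip = "/content/your_path_for_256x256pics.zip"  # placeholder: zip passed to --data

with zipfile.ZipFile(out_zip, "w") as zf:
    for name in sorted(os.listdir(src_dir)):
        if not name.lower().endswith(".jpg"):
            continue
        # Resize every picture to 256x256 before adding it to the archive
        img = Image.open(os.path.join(src_dir, name)).convert("RGB")
        img = img.resize((256, 256), Image.LANCZOS)
        tmp = os.path.join("/tmp", name)
        img.save(tmp, quality=95)
        zf.write(tmp, arcname=name)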

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1: albertovpd