Training custom object detection: train_spec and eval_specs[0] not found

I'm trying to train a custom object detector using TensorFlow on Google Colab, following this blog post: Building your own object detector — PyTorch vs TensorFlow and how to even get started?

I followed all the steps in the blog.

This is the code:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from absl import flags
import tensorflow as tf
from object_detection import model_hparams
from object_detection import model_lib

model_dir = "/content/drive/MyDrive/Tensorflow/models/research/faster_rcnn_inception_v2_coco_2018_01_28/model_dir"



config = tf.estimator.RunConfig(model_dir=model_dir)
pipeline_config_path= "/content/drive/MyDrive/Tensorflow/models/research/faster_rcnn_inception_v2_coco_2018_01_28/pipeline.config"
num_train_steps=10000


train_and_eval_dict = model_lib.create_estimator_and_inputs(
                        run_config=config,
                        hparams=model_hparams.create_hparams(None),
                        pipeline_config_path = pipeline_config_path,
                        train_steps =num_train_steps,
                        sample_1_of_n_eval_examples = 1)

estimator = train_and_eval_dict['estimator']
train_input_fn = train_and_eval_dict['train_input_fn']
eval_input_fns = train_and_eval_dict['eval_input_fns']
eval_on_train_input_fn = train_and_eval_dict['eval_on_train_input_fn']
predict_input_fn = train_and_eval_dict['predict_input_fn']
train_steps = train_and_eval_dict['train_steps']


tf.estimator.train_and_evaluate(estimator,train_spec,eval_specs[0])

Where are train_spec and eval_specs[0] supposed to be defined?

I get a warning saying these two names are not found.

What do I do?

Can anyone help, please?



Solution 1:[1]

This is the old-style Estimator workflow; you can also convert a Keras model to an Estimator with model_to_estimator and use it for training and prediction.

train_spec and eval_spec are nothing more than specifications telling train_and_evaluate which input function (dataset) to use for training and which to use for evaluation. For example, you may have multiple rows of inputs where changing the input is also reflected in the output within the same similarity scope.

In short, they let you target specific training groups or matching tables!

Training on a specific scope is a good idea for a small set of data, but when the model needs room to learn, training on a sub-dataset is an alternative, and for some tasks we do not use train_and_evaluate at all.

Table matching is quick, since the input does not need to be exactly the same: you can estimate the output from the input and its feedback during training, and predict without controlling the parameters. For example, [ A, B, C, D ] -> [ A, B, C, E ]: by how many units do they differ?
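
In the question's code the two specs are simply never created. For the Object Detection API's Estimator flow, the missing step is to build them from the input functions returned by create_estimator_and_inputs. A minimal sketch, following the pattern used in the library's model_main.py (check the exact call signature against your checkout of the library):

# Sketch, assuming the estimator, input functions and train_steps from the
# question's snippet are already defined via create_estimator_and_inputs.
train_spec, eval_specs = model_lib.create_train_and_eval_specs(
    train_input_fn,
    eval_input_fns,
    eval_on_train_input_fn,
    predict_input_fn,
    train_steps,
    eval_on_train_data=False)

# Only a single EvalSpec is supported, hence eval_specs[0].
tf.estimator.train_and_evaluate(estimator, train_spec, eval_specs[0])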

[ Sample ]:

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
Functions
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
def train_input_fn():
    pass
def eval_input_fn():
    pass

def input_fn():
    input = tf.random.uniform(shape=[1, 16, 16 * 3], minval=5, maxval=10, dtype=tf.int64)
    label = tf.constant( [0], shape=( 1, 1, 1 )  )
    return input, label
    
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Model Initialize
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
model = tf.keras.models.Sequential([
    tf.keras.layers.InputLayer(input_shape=( 16, 16 * 3 )),
    tf.keras.layers.Reshape((n_numpointer, n_roles)),
    tf.keras.layers.Reshape(( 16, 16 * 3 )),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True, return_state=False)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(1),
])
        
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(1))
model.add(tf.keras.layers.Dense(1))
model.summary()

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Optimizer
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
optimizer = tf.keras.optimizers.Nadam( learning_rate=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, name='Nadam' )

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Loss Fn
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""                               
lossfn = tf.keras.losses.MeanSquaredLogarithmicError(reduction=tf.keras.losses.Reduction.AUTO, name='mean_squared_logarithmic_error')

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Model Summary
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
model.compile(optimizer=optimizer, loss=lossfn)

train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=1000)
eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn)

est_model = tf.keras.estimator.model_to_estimator(keras_model=model)
tf.estimator.train_and_evaluate(est_model, train_spec, eval_spec)

... Sample
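
With those two specs in place, train_and_evaluate alternates between training with train_input_fn (up to max_steps) and periodically evaluating with eval_input_fn; that is exactly the information the undefined train_spec and eval_specs[0] were supposed to carry in the question's snippet.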

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 General Grievance