Implementing DetectionPolicy with existing Inceptionv3 model #41
henrypinkard wants to merge 20 commits into fuego-dev:detection_policy from henrypinkard:master
Conversation
…g now internal to detectionpolicy
downloadDir = 'XXX/orig'
archive_storage_bucket = "fuego-firecam-a"
server_ip_and_port = 'localhost:8500' #depends on the specific inference server running on GCP
"localhost" means inference runs locally on the same machine. This can also be the IP address of a remote inference server.
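As a minimal sketch of how that setting could be interpreted (the variable name comes from the config above; the remote address and the parsing helper are hypothetical, not part of the PR):

```python
server_ip_and_port = 'localhost:8500'   # local inference on the same machine
# server_ip_and_port = '10.0.0.5:8500'  # hypothetical remote inference server on GCP

# Split into host and port; rsplit guards against IPv6-style colons in the host
host, port = server_ip_and_port.rsplit(':', 1)
is_local = host in ('localhost', '127.0.0.1')
```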
dataset_dir, split_name, shard_id, numShards)
- with tf.python_io.TFRecordWriter(output_filename) as tfrecord_writer:
+ with tf.io.TFRecordWriter(output_filename) as tfrecord_writer:
This shouldn't change functionality; it just switches away from the deprecated interface.
kinshuk left a comment
I'm curious how the model produced by this compares with our current Inception model.
import datetime
import math

import docker
Please add the standard license and copyright header at the top of the file. Also, please add a short comment describing the main purpose of this file.
# tf.app.flags.DEFINE_string('server', server_ip_and_port, 'PredictionService host:port')
# channel = grpc.insecure_channel(tf.app.flags.FLAGS.server)

Should these two commented-out lines be deleted? If not, please describe when/how they are needed.
# tf.app.flags.DEFINE_string('server', server_ip_and_port, 'PredictionService host:port')
# channel = grpc.insecure_channel(tf.app.flags.FLAGS.server)
channel = grpc.insecure_channel(server_ip_and_port)
# grpc.secure_channel()

secure_channel definitely sounds better than insecure_channel. Please document what's preventing use of the secure version.
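For reference, a sketch of what the secure variant might look like, assuming the serving endpoint has TLS configured (the credentials setup here is an assumption for illustration, not something the PR provides):

```python
import grpc

server_ip_and_port = 'localhost:8500'  # matches the config above

# TLS using the system's root certificates; this assumes the serving
# endpoint presents a certificate signed by a trusted CA. Channel
# creation is lazy, so no connection is attempted until an RPC is made.
creds = grpc.ssl_channel_credentials()
channel = grpc.secure_channel(server_ip_and_port, creds)
```

If the server only listens on plaintext (the TF Serving default), secure_channel will fail at RPC time, which may be what's blocking the switch.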
docker pull tensorflow/serving:latest-gpu
sudo docker run -d --name serving_base tensorflow/serving:latest-gpu
# create intermediate dir
sudo docker exec serving_base mkdir -p /models/$NAME
sudo docker cp $MODEL_PATH serving_base:/models/$NAME/1
sudo docker commit --change "ENV MODEL_NAME $NAME" serving_base $NAME"_serving"
sudo docker kill serving_base
sudo docker rm serving_base

Is this possible from a Dockerfile? If so, what's the tradeoff of creating the image using this script vs. a Dockerfile?
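For comparison, a hypothetical Dockerfile equivalent of the script above (the model name and path are placeholders for the script's `$NAME` and `$MODEL_PATH`):

```dockerfile
FROM tensorflow/serving:latest-gpu

# Placeholder values standing in for $NAME and $MODEL_PATH in the script
ENV MODEL_NAME mymodel
COPY ./exported_model /models/mymodel/1
```

The usual tradeoff: a Dockerfile is declarative and rebuildable with `docker build`, while the commit-based script mutates a running container, which is harder to reproduce but avoids needing the model files in the build context.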
@@ -0,0 +1,77 @@
import glob

Please add the standard license and copyright header at the top of the file. Also, please add a short comment describing the main purpose of this file.
optArgs = [
    ["t", "trainPercentage", "percentage of data to use for training vs. validation (default 90)"]
]

The code doesn't use this argument, so you can just remove it and use an empty list.
val_steps = 100 #only needed for now because of a bug in tf2.0, which should be fixed in next version
#TODO: either set this to # of validation examples / batch size (i.e. figure out num validation examples)
#or upgrade to TF2.1 when it's ready and automatically go through the whole set

Please reference the bug number/URL in the comment.
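The TODO's first option can be sketched as follows (the example counts are hypothetical; the real value would have to be counted from the validation split):

```python
import math

num_validation_examples = 1000  # hypothetical; count this from the dataset
batch_size = 32                 # hypothetical; match the training config

# Steps needed for one full pass over the validation set;
# ceil ensures a final partial batch is not dropped
val_steps = math.ceil(num_validation_examples / batch_size)
```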
I think the commits that aren't under my username happened because I forgot to change the default git email on the VM template we used.