Guo, Yejun
8ce9d88f93
dnn/native: add native support for divide

It can be tested with a model file generated with the Python script below:
import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
z1 = 2 / x
z2 = 1 / z1
z3 = z2 / 0.25 + 0.3
z4 = z3 - x * 1.5 - 0.3
y = tf.identity(z4, name='dnn_out')
sess=tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
5 years ago
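For reference, the graph built above reduces algebraically to z1 = 2/x, z2 = 1/z1 = x/2, z3 = (x/2)/0.25 + 0.3 = 2x + 0.3, z4 = 2x + 0.3 - 1.5x - 0.3 = 0.5x, so dnn_out should simply be a half-brightness copy of dnn_in.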
					 
				
					
						
							
							
Guo, Yejun
ef79408e97
dnn/native: add native support for 'mul'

It can be tested with a model file generated with the Python script below:
import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
z1 = 0.5 + 0.3 * x
z2 = z1 * 4
z3 = z2 - x - 2.0
y = tf.identity(z3, name='dnn_out')
sess=tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
5 years ago
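For reference, this graph reduces to z1 = 0.5 + 0.3x, z2 = 2 + 1.2x, z3 = 2 + 1.2x - x - 2 = 0.2x, so dnn_out should be 0.2 times dnn_in.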
					 
				
					
						
							
							
Guo, Yejun
6aa7e07e7c
dnn/native: add native support for 'add'

It can be tested with the model file generated with the Python script below:
import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
z1 = 0.039 + x
z2 = x + 0.042
z3 = z1 + z2
z4 = z3 - 0.381
z5 = z4 - x
y = tf.math.maximum(z5, 0.0, name='dnn_out')
sess=tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
print("image_process.pb generated, please use \
path_to_ffmpeg/tools/python/convert.py to generate image_process.model\n")
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
5 years ago
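For reference, this graph reduces to z3 = (0.039 + x) + (x + 0.042) = 2x + 0.081, z4 = 2x - 0.3, z5 = x - 0.3, so dnn_out = max(dnn_in - 0.3, 0).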
					 
				
					
						
							
							
Guo, Yejun
ffa1561608
dnn_backend_native_layer_mathbinary: add sub support

More math binary operations will be added here.
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
5 years ago
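For illustration only (not part of the original commit message): a test script in the same style as the ones in the newer commits above, assuming the same TF 1.x and tools/python/convert.py workflow, could exercise the new sub operator like this:

import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
# exercise sub in both orders: constant - tensor and tensor - constant
z1 = 1.0 - x
z2 = z1 - 0.2
y = tf.identity(z2, name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))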
					 
				
					
						
							
							
Guo, Yejun
dff39ea9f0
dnn: add tf.nn.conv2d support for native model

Unlike other tf.*.conv2d layers, tf.nn.conv2d does not create many
nodes (within a scope) in the graph; it just acts like other layers.
tf.nn.conv2d only creates one node in the graph, and no internal
nodes such as 'kernel' are created.

The format of the native model file is also changed: a flag named
has_bias is added, so the version number is bumped.
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
6 years ago
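For illustration only (not from the original commit message): because tf.nn.conv2d takes the filter tensor directly and adds no bias, a test model exercising it could be generated, under the same TF 1.x / convert.py assumptions as the scripts above, with a sketch like:

import tensorflow as tf
import numpy as np
import imageio
in_img = imageio.imread('input.jpg')
in_img = in_img.astype(np.float32)/255.0
in_data = in_img[np.newaxis, :]
x = tf.placeholder(tf.float32, shape=[1, None, None, 3], name='dnn_in')
# 3x3 box-blur filter applied per channel (3 input channels -> 3 output channels);
# tf.nn.conv2d adds a single Conv2D node, with no internal 'kernel' node and no bias
kernel = np.zeros((3, 3, 3, 3), dtype=np.float32)
for c in range(3):
    kernel[:, :, c, c] = 1.0 / 9.0
conv = tf.nn.conv2d(x, tf.constant(kernel), strides=[1, 1, 1, 1], padding='SAME')
y = tf.identity(conv, name='dnn_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ['dnn_out'])
tf.train.write_graph(graph_def, '.', 'image_process.pb', as_text=False)
output = sess.run(y, feed_dict={x: in_data})
imageio.imsave("out.jpg", np.squeeze(output))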
					 
				
					
						
							
							
Guo, Yejun
b2683c66b2
libavfilter/dnn: add layer maximum for native mode.

The reason to add this layer is that it is used by srcnn in vf_sr.
This layer is currently ignored in native mode. After this patch,
we can add multiple-output support for native mode.
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
6 years ago
					 
				
					
						
							
							
Guo, Yejun
022f50d3fe
libavfilter/dnn: add header into native model file

Signed-off-by: Guo, Yejun <yejun.guo@intel.com>
Signed-off-by: Pedro Arthur <bygrandao@gmail.com>
6 years ago