When starting something new, the best thing to do is to start with something simple.

In our case, let's do linear regression: we will try to predict the price of a house from its size. Yes, we will use some made-up data, but that's fine.

First things first: everything in TensorFlow is in the form of an array (a tensor), so we begin by initialising our data as arrays.

```python
import numpy as np

# For LR: house areas (x) and prices (y)
Area = np.array([[987], [452], [876], [201], [349], [195], [1000],
                 [1501], [555], [724], [652], [328], [895]])
price = np.array([[1974], [904], [1752], [402], [698], [390], [2000],
                  [3002], [1110], [1448], [1304], [656], [1790]])
```

Okay, so we have area and price, that is, our x and y, both in the form of NumPy arrays.
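A quick sanity check on the shapes, and on the made-up relationship hidden in the data:

```python
import numpy as np

Area = np.array([[987], [452], [876], [201], [349], [195], [1000],
                 [1501], [555], [724], [652], [328], [895]])
price = np.array([[1974], [904], [1752], [402], [698], [390], [2000],
                  [3002], [1110], [1448], [1304], [656], [1790]])

print(Area.shape, price.shape)          # (13, 1) (13, 1)
print(np.array_equal(price, 2 * Area))  # True: every price is exactly twice the area
```

So a good linear model should end up with a weight of about 2.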

The next step is a crucial one. In it we will determine:

- Number of iterations
- Learning rate
- Cost Function

Why these three? Because together they let us minimise the error: gradient descent repeatedly nudges the weight in the direction that reduces the cost function.
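The TensorFlow code below performs these updates for us, but as a plain-NumPy sketch (with the data scaled down, since with raw areas in the hundreds a learning rate of 0.01 would diverge), one training loop looks like this:

```python
import numpy as np

# Same made-up data as above, flattened and scaled for stability
area = np.array([987, 452, 876, 201, 349, 195, 1000,
                 1501, 555, 724, 652, 328, 895], dtype=float)
price = 2 * area

x = area / 1000.0       # scale inputs so learning_rate = 0.01 is stable
y = price / 1000.0
w = 1.0                 # initial weight

learning_rate = 0.01
for epoch in range(1000):
    y_pred = w * x
    cost = np.mean((y_pred - y) ** 2)      # mean squared error
    grad = 2 * np.mean(x * (y_pred - y))   # d(cost)/dw
    w -= learning_rate * grad              # gradient descent update

print(w)  # converges towards 2.0
```

Each iteration (epoch) computes the cost, takes its gradient with respect to the weight, and steps the weight a small amount (the learning rate) downhill.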

```python
import tensorflow as tf

n_dim = Area.shape[1]   # number of features (here 1: just the area)

learning_rate = 0.01
training_epochs = 1000
cost_history = np.empty(shape=[1], dtype=float)

X = tf.placeholder(tf.float32, [None, n_dim])   # inputs (areas)
Y = tf.placeholder(tf.float32, [None, 1])       # targets (prices)
W = tf.Variable(tf.ones([n_dim, 1]))            # weight to learn

init = tf.global_variables_initializer()        # initialize_all_variables() is deprecated

y_ = tf.matmul(X, W)                            # the model's prediction
cost = tf.reduce_mean(tf.square(y_ - Y))        # mean squared error
training_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
```

An important thing to note here is that nothing has actually been executed yet. TensorFlow operations only run when they are explicitly called inside a session. Until then, we define placeholders for the variables that will take part in the computation.
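As a toy analogy in plain Python (not the TensorFlow API), defining a computation and running it are two separate steps:

```python
import operator

graph = []                      # ordered list of (op, args) to run later

def add_op(op, *args):
    graph.append((op, args))    # "define" the op; nothing executes yet

add_op(operator.mul, 3, 4)
add_op(operator.add, 10, 5)

print(len(graph))               # 2 ops defined, none executed yet

results = [op(*args) for op, args in graph]   # the "session run"
print(results)                  # [12, 15]
```

TensorFlow's graph works in the same spirit: the lines above only describe the computation, and a session executes it.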

So, for example, we need X, Y and W for

y = W*x

(the code above omits the bias b, which is fine here since our made-up prices are an exact multiple of the areas).
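As a sanity check on what the optimizer should converge to: for a no-bias model y = W*x, least squares has a closed-form solution we can compute directly (a hypothetical check, not part of the TensorFlow walkthrough):

```python
import numpy as np

area = np.array([987, 452, 876, 201, 349, 195, 1000,
                 1501, 555, 724, 652, 328, 895], dtype=float)
price = 2 * area    # same made-up data as above

# Closed-form least-squares slope for y = W*x (no bias)
W = np.sum(area * price) / np.sum(area ** 2)
print(W)  # 2.0: the slope gradient descent should find
```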

Finally, let's execute the TensorFlow graph:

```python
sess = tf.Session()
sess.run(init)

# Scale the data: with raw areas in the hundreds, a learning rate of
# 0.01 would make gradient descent diverge.
train_x = Area / 1000.0
train_y = price / 1000.0

for epoch in range(training_epochs):
    sess.run(training_step, feed_dict={X: train_x, Y: train_y})
    cost_history = np.append(cost_history,
                             sess.run(cost, feed_dict={X: train_x, Y: train_y}))
```

This will actually train the model, recording the cost at each epoch.

You can find the code for this on GitHub here.

If you are looking for something with a bigger data set, you can find the code for regression on the Boston data set using TensorFlow here.