
Deep Learning lecture

ML lec 04 - multi-variable linear regression (*new)

Recap
· Hypothesis - we learn W and b.
· Cost function - the sum of the squared differences between the predicted values and the true values.
· Gradient Descent Algorithm

Predicting exam score: regression using one input (x)

x (hours)   y (score)
10          90
9           80
3           50
2           60
11          40

Predicting exam score: regression using three inputs (x1, x2, x3)

x1 (quiz 1)   x2 (quiz 2)   x3 (midterm 1)   Y (final)
73            80            75               152
93            88            93               185
89            91            90               180
96            98            100              196
73            96            70               142

Hypo.. 2020. 4. 27.
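To make the multi-variable case concrete, here is a minimal sketch (not from the post) that computes the hypothesis H(x1, x2, x3) = w1*x1 + w2*x2 + w3*x3 + b and its MSE cost over the exam-score table above; the weight and bias values are assumptions chosen purely for illustration:

import numpy as np

x_data = np.array([[73., 80., 75.],
                   [93., 88., 93.],
                   [89., 91., 90.],
                   [96., 98., 100.],
                   [73., 96., 70.]])
y_data = np.array([152., 185., 180., 196., 142.])

W = np.array([0.7, 0.7, 0.6])  # assumed weights, for illustration only
b = 0.0                        # assumed bias

hypothesis = x_data @ W + b                  # H(x) = XW + b, one prediction per row
cost = np.mean((hypothesis - y_data) ** 2)   # MSE cost
print(hypothesis, cost)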
ML lab 03 - TensorFlow implementation of cost minimization for Linear Regression (new)

lab-03-2-minimizing_cost_gradient_update.py

# Lab 3 Minimizing Cost
import tensorflow as tf

tf.set_random_seed(777)  # for reproducibility

x_data = [1, 2, 3]
y_data = [1, 2, 3]

# Try to find values for W and b to compute y_data = W * x_data
# We know that W should be 1
# But let's use TensorFlow to figure it out
W = tf.Variable(tf.random_normal([1]), name="weight")
X = tf.placeholder(tf.float32)
.. 2020. 4. 27.
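The excerpt is cut off after the X placeholder. For context, a minimal sketch of how a gradient-update version of this lab is typically completed, assuming TensorFlow 1.x; this is an illustration, not necessarily the post's exact code:

Y = tf.placeholder(tf.float32)

hypothesis = X * W                                # simplified H(x) = W * x (b omitted)
cost = tf.reduce_mean(tf.square(hypothesis - Y))  # MSE cost

# Manual gradient descent: d(cost)/dW = 2 * mean((W*x - y) * x);
# the constant 2 is folded into the learning rate here.
learning_rate = 0.1
gradient = tf.reduce_mean((W * X - Y) * X)
descent = W - learning_rate * gradient
update = W.assign(descent)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(21):
        _, cost_val, W_val = sess.run([update, cost, W],
                                      feed_dict={X: x_data, Y: y_data})
        print(step, cost_val, W_val)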
ML lec 03 - How the cost minimization algorithm for Linear Regression works

Hypothesis and Cost
*cost function - MSE (mean squared error)

Simplified hypothesis
*b is omitted to simplify the explanation

What does cost(W) look like?

X   Y
1   1
2   2
3   3

· W=1, cost(W)=0    : 1/3((1*1-1)^2 + (1*2-2)^2 + (1*3-3)^2)
· W=0, cost(W)=4.67 : 1/3((0*1-1)^2 + (0*2-2)^2 + (0*3-3)^2)
· W=2, cost(W)= ?

What does cost(W) look like?
· W=1, cost(W)=0
· W=0, cost(W)=4.67
· W=2, cost(W)=4.67

-The goal is to mechanically find the point (W) where the cost is minimized. Finding that point.. 2020. 4. 26.
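A minimal sketch (not from the post) that evaluates cost(W) = 1/m * Σ(W*x_i - y_i)^2 over the data above, showing the convex bowl shape the lecture describes:

def cost(W, xs, ys):
    # MSE cost for the simplified hypothesis H(x) = W * x (b omitted)
    return sum((W * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

xs, ys = [1, 2, 3], [1, 2, 3]
for W in [-1, 0, 1, 2, 3]:
    print(W, round(cost(W, xs, ys), 2))
# cost is 0 at W=1 and grows quadratically on both sides:
# W=0 and W=2 both give 14/3 ≈ 4.67, answering the W=2 question above.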
ML lec 02 - Implementing simple Linear regression with TensorFlow

Review (H(x) = predicted value, y = true value)
Adjust W and b to minimize the cost.

TensorFlow Mechanics
1. Build graph using TF operations
2. session run - executes the graph
3. graph update

1. Build graph using TF operations

# X and Y data
x_train = [1, 2, 3]
y_train = [1, 2, 3]

# Try to find values for W and b to compute y_data = x_data * W + b
# We know that W should be 1 and b should be 0
# But let TensorFlow figure it out.. 2020. 4. 25.
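For reference, a minimal sketch of how the remaining build/run/update steps are typically filled in, assuming TensorFlow 1.x; the optimizer settings and step counts are illustrative assumptions, not the post's exact code:

import tensorflow as tf

# 1. Build graph using TF operations
x_train = [1, 2, 3]
y_train = [1, 2, 3]
W = tf.Variable(tf.random_normal([1]), name="weight")
b = tf.Variable(tf.random_normal([1]), name="bias")
hypothesis = x_train * W + b                            # H(x) = W * x + b
cost = tf.reduce_mean(tf.square(hypothesis - y_train))  # MSE cost
train = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)

# 2./3. Run the graph in a session and update the variables
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(2001):
        sess.run(train)
        if step % 200 == 0:
            print(step, sess.run(cost), sess.run(W), sess.run(b))
# W should converge toward 1 and b toward 0.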