# Andrew Ng's Deep Learning Course, Week 2, Programming Exercise 1

## Python Basics with Numpy

Posted by WeiYang on 2017-09-20

# 1 - Building basic functions with numpy

## 1.1 - sigmoid function, np.exp()

Exercise: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.
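A minimal sketch of this exercise (the function name `basic_sigmoid` is my own choice): since `math.exp` only accepts real numbers, this version works for scalars but not for numpy arrays.

```python
import math

def basic_sigmoid(x):
    """Compute the sigmoid of a real number x using math.exp.

    Works only for scalars; math.exp cannot handle numpy arrays.
    """
    return 1 / (1 + math.exp(-x))
```

For example, `basic_sigmoid(0)` returns `0.5`.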

Exercise: Implement the sigmoid function using numpy.
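One way to do this with numpy: `np.exp` is applied element-wise, so the same formula now works for scalars, vectors, and matrices.

```python
import numpy as np

def sigmoid(x):
    """Compute the sigmoid of x element-wise.

    x can be a scalar or a numpy array of any shape; np.exp broadcasts
    the operation over every element.
    """
    return 1 / (1 + np.exp(-x))
```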

## 1.2 - Sigmoid gradient

Exercise: Implement the function sigmoid_derivative() to compute the gradient of the sigmoid function with respect to its input x. The formula is:
$sigmoid\_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))$
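A sketch that applies the formula directly: compute $s = \sigma(x)$ once, then return $s(1-s)$.

```python
import numpy as np

def sigmoid_derivative(x):
    """Gradient of the sigmoid function with respect to x.

    Uses the identity sigma'(x) = sigma(x) * (1 - sigma(x)).
    """
    s = 1 / (1 + np.exp(-x))
    return s * (1 - s)
```

The gradient peaks at `x = 0`, where `sigmoid_derivative(0)` equals `0.25`.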

## 1.3 - Reshaping arrays

Exercise: Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1).
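A possible implementation using `np.reshape`: the three dimensions are read off the input's `shape` tuple, so the function works for any image size.

```python
import numpy as np

def image2vector(image):
    """Flatten an image of shape (length, height, 3) into a column
    vector of shape (length*height*3, 1)."""
    return image.reshape(image.shape[0] * image.shape[1] * image.shape[2], 1)
```

For example, an input of shape `(2, 3, 3)` becomes a vector of shape `(18, 1)`.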

## 1.4 - Normalizing rows

Exercise: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).
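One sketch using `np.linalg.norm`: with `axis=1` the norm is taken per row, and `keepdims=True` keeps the result as a column so broadcasting divides each row by its own norm.

```python
import numpy as np

def normalizeRows(x):
    """Divide each row of x by its L2 norm so every row has unit length.

    x_norm has shape (n, 1), so the division broadcasts across columns.
    """
    x_norm = np.linalg.norm(x, axis=1, keepdims=True)
    return x / x_norm
```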

## 1.5 - Broadcasting and the softmax function

Exercise: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.
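A row-wise sketch: exponentiate every entry, sum along each row with `keepdims=True`, and let broadcasting do the division. (For very large inputs a production version would subtract the row maximum before exponentiating to avoid overflow; that refinement is omitted here for clarity.)

```python
import numpy as np

def softmax(x):
    """Row-wise softmax of a 2D array.

    x_sum has shape (n, 1), so x_exp / x_sum broadcasts the division
    across each row; every output row sums to 1.
    """
    x_exp = np.exp(x)
    x_sum = np.sum(x_exp, axis=1, keepdims=True)
    return x_exp / x_sum
```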

# 2 - Vectorization

## 2.1 - Implement the L1 and L2 loss functions

Exercise: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.
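A one-line vectorized sketch, with `yhat` for the predictions and `y` for the true labels: `np.abs` and `np.sum` replace any explicit loop.

```python
import numpy as np

def L1(yhat, y):
    """L1 loss: sum of absolute differences between predictions yhat
    and true labels y, computed without an explicit loop."""
    return np.sum(np.abs(y - yhat))
```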

Exercise: Implement the numpy vectorized version of the L2 loss. There are several ways of implementing the L2 loss, but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, \ldots, x_n]$, then np.dot(x,x) = $\sum_{j=1}^n x_j^{2}$.
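Following the hint, one sketch takes the dot product of the difference vector with itself, which is exactly the sum of squared differences.

```python
import numpy as np

def L2(yhat, y):
    """L2 loss: sum of squared differences between predictions yhat
    and true labels y, via np.dot(diff, diff)."""
    diff = y - yhat
    return np.dot(diff, diff)
```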