date: 2024-05-15
title: numpy
status: TOBECONTINUED
author:
- Dr. Eric Zhao
tags:
- Python
- NOTE
created: 2024-05-15T14:24
updated: 2024-06-11T01:16
publish: True
# numpy
## numpy array
# integer array:
np.array([1, 4, 2, 5, 3])
Output:
array([1, 4, 2, 5, 3])
Unlike Python lists, NumPy arrays hold a single data type; if the types do not match, NumPy will upcast where possible (here, the integers are upcast to floating point):
np.array([3.14, 4, 2, 3])
Output:
array([3.14, 4. , 2. , 3. ])
To explicitly set the data type of the resulting array, use the dtype keyword:
np.array([1, 2, 3, 4], dtype='float32')
Output:
array([1., 2., 3., 4.], dtype=float32)
# nested lists result in multi-dimensional arrays
a = np.array([range(i, i + 3) for i in [2, 4, 6]])
a
Output
array([[2, 3, 4], [4, 5, 6], [6, 7, 8]])
Especially for larger arrays, it is more efficient to create arrays from scratch using routines built into NumPy.
np.zeros(<shape>, dtype=<datatype>)
np.ones(<shape>, dtype=<datatype>)
np.full(<shape>, <fill value>)
np.zeros(10, dtype=int)
Output
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
np.ones((3, 5), dtype=float)
Output
array([[1., 1., 1., 1., 1.], [1., 1., 1., 1., 1.], [1., 1., 1., 1., 1.]])
np.full((3, 5), 3.14)
array([[3.14, 3.14, 3.14, 3.14, 3.14],
[3.14, 3.14, 3.14, 3.14, 3.14],
[3.14, 3.14, 3.14, 3.14, 3.14]])
np.arange(start, end, step)            # Values from start up to (but not including) end, stepping by step
np.linspace(start, end, num)           # num values evenly spaced between start and end (inclusive)
np.random.random(<shape>)              # Uniformly distributed values in [0, 1)
np.random.normal(mean, std, <shape>)   # Normally distributed values with the given mean and standard deviation
np.random.randint(a, b, <shape>)       # Random integers in the interval [a, b)
np.eye(n)                              # n x n identity matrix
# Starting at 0, ending before 20, stepping by 2
# (this is similar to the built-in range() function)
np.arange(0, 20, 2)
array([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18])
# Create an array of five values evenly spaced between 0 and 1
np.linspace(0, 1, 5)
array([0. , 0.25, 0.5 , 0.75, 1. ])
# random values between 0 and 1
np.random.random((3, 3))
array([[0.37598881, 0.21179029, 0.70865879],
[0.95135829, 0.7208924 , 0.90282807],
[0.94199024, 0.31798853, 0.96914883]])
# Create a 3x3 array of normally distributed random values with mean 0 and standard deviation 1
np.random.normal(0, 1, (3, 3))
array([[-0.60396295, -1.2562087 , -0.70299877],
[ 1.19554453, 0.16795621, -1.08435634],
[-1.53097616, -0.88395816, -0.16151936]])
# Create a 3x3 array of random integers in the interval [0, 10)
np.random.randint(0, 10, (3, 3))
array([[6, 1, 5],
[5, 8, 9],
[9, 6, 4]])
np.eye(3)
array([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
# Create an uninitialized array of three floats; the values will be whatever happens to already exist at that memory location
np.empty(3)
array([1., 1., 1.])
Note that when constructing an array, the data type can be specified using a string:
np.zeros(10, dtype='int16')
Or using the associated NumPy object:
np.zeros(10, dtype=np.int16)
| Data type | Description |
|---|---|
| bool_ | Boolean (True or False) stored as a byte |
| int_ | Default integer type (same as C long; normally either int64 or int32) |
| intc | Identical to C int (normally int32 or int64) |
| intp | Integer used for indexing (same as C ssize_t; normally either int32 or int64) |
| int8 | Byte (-128 to 127) |
| int16 | Integer (-32768 to 32767) |
| int32 | Integer (-2147483648 to 2147483647) |
| int64 | Integer (-9223372036854775808 to 9223372036854775807) |
| uint8 | Unsigned integer (0 to 255) |
| uint16 | Unsigned integer (0 to 65535) |
| uint32 | Unsigned integer (0 to 4294967295) |
| uint64 | Unsigned integer (0 to 18446744073709551615) |
| float_ | Shorthand for float64 |
| float16 | Half-precision float: sign bit, 5-bit exponent, 10-bit mantissa |
| float32 | Single-precision float: sign bit, 8-bit exponent, 23-bit mantissa |
| float64 | Double-precision float: sign bit, 11-bit exponent, 52-bit mantissa |
| complex_ | Shorthand for complex128 |
| complex64 | Complex number, represented by two 32-bit floats |
| complex128 | Complex number, represented by two 64-bit floats |
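As a quick check of the ranges in the table, NumPy can report the limits of each type itself; here is a minimal sketch using np.iinfo and np.finfo:

```python
import numpy as np

# Query the limits listed in the table programmatically
print(np.iinfo(np.int16))    # min = -32768, max = 32767
print(np.finfo(np.float32))  # machine parameters for single precision
```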
This section will present several examples of using NumPy array manipulation to access data and subarrays, and to split, reshape, and join the arrays.
Get to know them well!
We'll cover a few categories of basic array manipulation here:
- Attributes of arrays: determining the size, shape, and data type of an array
- Indexing of arrays: getting and setting the values of individual array elements
- Slicing of arrays: getting and setting smaller subarrays within a larger array
- Reshaping of arrays: changing the shape of a given array
- Joining and splitting of arrays: combining multiple arrays into one, and splitting one array into many
First let's discuss some useful array attributes.
We'll start by defining three random arrays, a one-dimensional, two-dimensional, and three-dimensional array.
We'll use NumPy's random number generator, which we will seed with a set value in order to ensure that the same random arrays are generated each time this code is run:
import numpy as np
np.random.seed(0) # seed for reproducibility
x1 = np.random.randint(10, size=6) # One-dimensional array
x2 = np.random.randint(10, size=(3, 4)) # Two-dimensional array
x3 = np.random.randint(10, size=(3, 4, 5)) # Three-dimensional array
Each array has attributes ndim (the number of dimensions), shape (the size of each dimension), and size (the total size of the array):
print("x3 ndim: ", x3.ndim)
print("x3 shape:", x3.shape)
print("x3 size: ", x3.size)
x3 ndim: 3
x3 shape: (3, 4, 5)
x3 size: 60
Another useful attribute is dtype, the data type of the array:
print("dtype:", x3.dtype)
dtype: int64
Other attributes include itemsize, which lists the size (in bytes) of each array element, and nbytes, which lists the total size (in bytes) of the array:
print("itemsize:", x3.itemsize, "bytes")
print("nbytes:", x3.nbytes, "bytes")
itemsize: 8 bytes
nbytes: 480 bytes
In general, we expect that nbytes is equal to itemsize times size.
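As a small sanity check (a sketch using the x3 array defined above), we can confirm that relationship directly:

```python
# nbytes should equal itemsize * size for the x3 array defined above
print(x3.itemsize * x3.size)               # 480
print(x3.nbytes == x3.itemsize * x3.size)  # True
```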
If you are familiar with Python's standard list indexing, indexing in NumPy will feel quite familiar.
In a one-dimensional array, the i-th value (counting from zero) can be accessed by specifying the desired index in square brackets, just as with Python lists:
x1
array([5, 0, 3, 3, 7, 9])
x1[0]
5
To index from the end of the array, you can use negative indices:
x1[-1]
9
In a multi-dimensional array, items can be accessed using a comma-separated tuple of indices:
x2
array([[3, 5, 2, 4],
[7, 6, 8, 8],
[1, 6, 7, 7]])
x2[2, 0]
1
x2[2, -1]
7
Q4: What is x2[-1, -2]?
Values can also be modified using any of the above index notation:
x2[0, 0] = 12
x2
array([[12, 5, 2, 4],
[ 7, 6, 8, 8],
[ 1, 6, 7, 7]])
Keep in mind that, unlike Python lists, NumPy arrays have a fixed type.
This means, for example, that if you attempt to insert a floating-point value into an integer array, the value will be silently truncated. Don't be caught unaware by this behavior!
x1[0] = 3.14159 # this will be truncated!
x1
array([3, 0, 3, 3, 7, 9])
Just as we can use square brackets to access individual array elements, we can also use them to access subarrays with the slice notation, marked by the colon (:) character.
The NumPy slicing syntax follows that of the standard Python list; to access a slice of an array x, use this:
x[start:stop:step]
If any of these are unspecified, they default to the values start=0, stop=<size of dimension>, step=1.
We'll take a look at accessing sub-arrays in one dimension and in multiple dimensions.
x = np.arange(10)
x
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
x[:5] # first five elements
array([0, 1, 2, 3, 4])
x[5:] # elements after index 5
array([5, 6, 7, 8, 9])
x[4:7] # middle sub-array
array([4, 5, 6])
x[::2] # every other element
array([0, 2, 4, 6, 8])
x[1::2] # every other element, starting at index 1
A potentially confusing case is when the step value is negative. In this case the array is traversed in reverse: start is the starting index of the reversed list, while stop is the stopping index.
This becomes a convenient way to reverse an array:
x[::-1] # all elements, reversed
x[5::-1] # reversed list starting from index 5
Q5: What is x-2-2?
Multi-dimensional slices work in the same way, with multiple slices separated by commas.
For example:
x2
x2[:2, :3] # first two rows, first three columns
Q5: What is x2[:4, :5]?
x2[:3, ::2] # all rows, every other column
Finally, subarray dimensions can even be reversed together:
x2[::-1, ::-1] # reverse rows and columns together
One commonly needed routine is accessing of single rows or columns of an array.
This can be done by combining indexing and slicing, using an empty slice marked by a single colon (:):
print(x2[:, 0]) # first column of x2
x2[:,0].shape
print(x2[0, :]) # first row of x2
In the case of row access, the empty slice can be omitted for a more compact syntax:
print(x2[0]) # equivalent to x2[0, :]
One important and extremely useful thing to know about array slices is that they return views rather than copies of the array data.
This is one area in which NumPy array slicing differs from Python list slicing: in lists, slices are copies.
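For comparison, here is a minimal sketch of that difference: the list slice is an independent copy, while the array slice shares data with the original:

```python
lst = [1, 2, 3, 4]
sub = lst[:2]
sub[0] = 99
print(lst)     # [1, 2, 3, 4] -- the list is unchanged

arr = np.array([1, 2, 3, 4])
view = arr[:2]
view[0] = 99
print(arr)     # [99  2  3  4] -- the array is changed through the view
```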
Consider our two-dimensional array from before:
print(x2)
Let's extract a 2x2 subarray from it:
x2_sub = x2[:2, :2]
print(x2_sub)
Now if we modify this subarray, we'll see that the original array is changed! Observe:
x2_sub[0, 0] = 99
print(x2_sub)
print(x2)
This default behavior is actually quite useful: it means that when we work with large datasets, we can access and process pieces of these datasets without the need to copy the underlying data buffer.
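If you are ever unsure whether a slice shares memory with its parent array, the base attribute gives a quick check; a small sketch:

```python
big = np.zeros((1000, 1000))
piece = big[:10, :10]        # a view, no data copied
print(piece.base is big)     # True: piece shares big's data buffer
print(piece.copy().base)     # None: an explicit copy owns its own data
```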
Despite the nice features of array views, it is sometimes useful to instead explicitly copy the data within an array or a subarray. This can be done most easily with the copy() method:
x2_sub_copy = x2[:2, :2].copy()
print(x2_sub_copy)
If we now modify this subarray, the original array is not touched:
x2_sub_copy[0, 0] = 42
print(x2_sub_copy)
print(x2)
Another useful type of operation is reshaping of arrays.
The most flexible way of doing this is with the reshape method.
For example, if you want to put the numbers 1 through 9 in a 3x3 grid, you can do the following:
grid = np.arange(1, 10).reshape((3, 3))
print(grid)
Note that for this to work, the size of the initial array must match the size of the reshaped array.
Where possible, the reshape method will use a no-copy view of the initial array, but with non-contiguous memory buffers this is not always the case.
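A brief sketch of this behavior: reshaping a contiguous array yields a view, while reshaping a non-contiguous one (for example, a transpose) forces a copy:

```python
a = np.arange(6)
v = a.reshape((2, 3))   # contiguous input: no-copy view
v[0, 0] = 99
print(a[0])             # 99 -- v shares a's data

t = v.T                 # the transposed view is non-contiguous in memory
r = t.reshape(6)        # NumPy must copy here
r[0] = -1
print(a[0])             # still 99 -- r does not share a's data
```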
Another common reshaping pattern is the conversion of a one-dimensional array into a two-dimensional row or column matrix.
This can be done with the reshape method, or more easily by making use of the newaxis keyword within a slice operation:
x = np.array([1, 2, 3])
x
# row vector via newaxis
x[np.newaxis, :]
# column vector via reshape
x.reshape((3, 1))
Q6: What has been changed after the reshape method?
# column vector via newaxis
x = np.array([1, 2, 3])
x[:, np.newaxis]
All of the preceding routines worked on single arrays. It's also possible to combine multiple arrays into one, and to conversely split a single array into multiple arrays. We'll take a look at those operations here.
Concatenation, or joining of two arrays in NumPy, is primarily accomplished using the routines np.concatenate, np.vstack, and np.hstack.
np.concatenate takes a tuple or list of arrays as its first argument, as we can see here:
x = np.array([1, 2, 3])
y = np.array([3, 2, 1])
np.concatenate([x, y])
You can also concatenate more than two arrays at once:
z = [99, 99, 99]
print(np.concatenate([x, y, z]))
It can also be used for two-dimensional arrays:
grid = np.array([[1, 2, 3],
[4, 5, 6]])
# concatenate along the first axis
np.concatenate([grid, grid])
# concatenate along the second axis (zero-indexed)
np.concatenate([grid, grid], axis=1)
For working with arrays of mixed dimensions, it can be clearer to use the np.vstack (vertical stack) and np.hstack (horizontal stack) functions:
x = np.array([1, 2, 3])
grid = np.array([[9, 8, 7],
[6, 5, 4]])
# vertically stack the arrays
np.vstack([x, grid])
Q7: What may happen if we use the hstack function in the previous cell?
# horizontally stack the arrays
y = np.array([[99],
[99]])
np.hstack([grid, y])
Similarly, np.dstack will stack arrays along the third axis.
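A small sketch of np.dstack, stacking two 2x2 arrays into a 2x2x2 array:

```python
a = np.array([[1, 2],
              [3, 4]])
b = np.array([[5, 6],
              [7, 8]])
stacked = np.dstack([a, b])   # stack along a (new) third axis
print(stacked.shape)          # (2, 2, 2)
print(stacked[:, :, 0])       # recovers a
```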
The opposite of concatenation is splitting, which is implemented by the functions np.split, np.hsplit, and np.vsplit. For each of these, we can pass a list of indices giving the split points:
x = [1, 2, 3, 99, 99, 3, 2, 1]
x1, x2, x3 = np.split(x, [3, 5])  # split at indices 3 and 5
print(x1, x2, x3)
Notice that N split points lead to N + 1 subarrays.
The related functions np.hsplit and np.vsplit are similar:
grid = np.arange(16).reshape((4, 4))
grid
upper, lower = np.vsplit(grid, [2])
print(upper)
print(lower)
left, right = np.hsplit(grid, [2])
print(left)
print(right)
Similarly, np.dsplit will split arrays along the third axis.
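And a matching sketch of np.dsplit on a three-dimensional array:

```python
arr = np.arange(16).reshape((2, 2, 4))
front, back = np.dsplit(arr, [2])   # split the third axis at index 2
print(front.shape, back.shape)      # (2, 2, 2) (2, 2, 2)
```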
Computation on NumPy arrays can be very fast, or it can be very slow.
The key to making it fast is to use vectorized operations, generally implemented through NumPy's universal functions (ufuncs).
This section motivates the need for NumPy's ufuncs, which can be used to make repeated calculations on array elements much more efficient.
It then introduces many of the most common and useful arithmetic ufuncs available in the NumPy package.
The relative slowness of Python generally manifests itself in situations where many small operations are being repeated, for instance looping over arrays to operate on each element.
For example, imagine we have an array of values and we'd like to compute the reciprocal of each.
A straightforward approach might look like this:
import numpy as np
np.random.seed(0)
def compute_reciprocals(values):
output = np.empty(len(values))
for i in range(len(values)):
output[i] = 1.0 / values[i]
return output
values = np.random.randint(1, 10, size=5)
compute_reciprocals(values)
This implementation probably feels fairly natural to someone from, say, a C or Java background.
But if we measure the execution time of this code for a large input, we see that this operation is very slow, perhaps surprisingly so!
We'll benchmark this with IPython's %timeit:
big_array = np.random.randint(1, 100, size=1000000)
%timeit compute_reciprocals(big_array)
It takes several seconds to compute these million operations and to store the result!
When even cell phones have processing speeds measured in Giga-FLOPS (i.e., billions of numerical operations per second), this seems almost absurdly slow.
It turns out that the bottleneck here is not the operations themselves, but the type-checking and function dispatches that CPython must do at each cycle of the loop.
Each time the reciprocal is computed, Python first examines the object's type and does a dynamic lookup of the correct function to use for that type.
If we were working in compiled code instead, this type specification would be known before the code executes and the result could be computed much more efficiently.
For many types of operations, NumPy provides a convenient interface into just this kind of statically typed, compiled routine. This is known as a vectorized operation.
This can be accomplished by simply performing an operation on the array, which will then be applied to each element.
This vectorized approach is designed to push the loop into the compiled layer that underlies NumPy, leading to much faster execution.
Compare the results of the following two approaches:
print(compute_reciprocals(values))
print(1.0 / values)
Looking at the execution time for our big array, we see that it completes orders of magnitude faster than the Python loop:
%timeit (1.0 / big_array)
Vectorized operations in NumPy are implemented via ufuncs, whose main purpose is to quickly execute repeated operations on values in NumPy arrays.
Ufuncs are extremely flexible: above we saw an operation between a scalar and an array, but we can also operate between two arrays:
np.arange(5)
np.arange(5) / np.arange(1, 6)
And ufunc operations are not limited to one-dimensional arrays; they can act on multi-dimensional arrays as well:
x = np.arange(9).reshape((3, 3))
2 ** x
Computations using vectorization through ufuncs are nearly always more efficient than their counterpart implemented using Python loops, especially as the arrays grow in size.
Any time you see such a loop in a Python script, you should consider whether it can be replaced with a vectorized expression.
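As an illustration of that advice, here is a sketch of a Python-level loop next to the vectorized expression that replaces it (both perform the same reduction, the second in compiled code):

```python
vals = np.random.random(1_000_000)

total = 0.0
for v in vals:        # slow: one Python-level operation per element
    total += v

print(total)
print(vals.sum())     # fast: the same reduction performed by a compiled routine
# (the two results may differ in the last digits due to summation order)
```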
Ufuncs exist in two flavors: unary ufuncs, which operate on a single input, and binary ufuncs, which operate on two inputs.
We'll see examples of both these types of functions here.
NumPy's ufuncs feel very natural to use because they make use of Python's native arithmetic operators.
The standard addition, subtraction, multiplication, and division can all be used:
x = np.arange(4)
print("x =", x)
print("x + 5 =", x + 5)
print("x - 5 =", x - 5)
print("x * 2 =", x * 2)
print("x / 2 =", x / 2)
print("x // 2 =", x // 2) # floor division
There is also a unary ufunc for negation, a ** operator for exponentiation, and a % operator for modulus:
print("-x = ", -x)
print("x ** 2 = ", x ** 2)
print("x % 2 = ", x % 2)
In addition, these can be strung together however you wish, and the standard order of operations is respected:
-(0.5*x + 1) ** 2
Each of these arithmetic operations is simply a convenient wrapper around a specific function built into NumPy; for example, the + operator is a wrapper for the add function:
np.add(x, 2)
The following table lists the arithmetic operators implemented in NumPy:
| Operator | Equivalent ufunc | Description |
|---|---|---|
| + | np.add | Addition (e.g., 1 + 1 = 2) |
| - | np.subtract | Subtraction (e.g., 3 - 2 = 1) |
| - | np.negative | Unary negation (e.g., -2) |
| * | np.multiply | Multiplication (e.g., 2 * 3 = 6) |
| / | np.divide | Division (e.g., 3 / 2 = 1.5) |
| // | np.floor_divide | Floor division (e.g., 3 // 2 = 1) |
| ** | np.power | Exponentiation (e.g., 2 ** 3 = 8) |
| % | np.mod | Modulus/remainder (e.g., 9 % 4 = 1) |
Just as NumPy understands Python's built-in arithmetic operators, it also understands Python's built-in absolute value function:
x = np.array([-2, -1, 0, 1, 2])
abs(x)
The corresponding NumPy ufunc is np.absolute, which is also available under the alias np.abs:
np.absolute(x)
np.abs(x)
This ufunc can also handle complex data, in which case the absolute value returns the magnitude:
x = np.array([3 - 4j, 4 - 3j, 2 + 0j, 0 + 1j])
np.abs(x)
NumPy provides a large number of useful ufuncs, and some of the most useful for the data scientist are the trigonometric functions.
We'll start by defining an array of angles:
Q8: What is the Chinese term for trigonometric functions?
theta = np.linspace(0, np.pi, 3)
print(theta)
Now we can compute some trigonometric functions on these values:
print("theta = ", theta)
print("sin(theta) = ", np.sin(theta))
print("cos(theta) = ", np.cos(theta))
print("tan(theta) = ", np.tan(theta))
The values are computed to within machine precision, which is why values that should be zero do not always hit exactly zero.
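For instance, sin(pi) does not come out exactly zero; a quick sketch, using np.isclose to compare against zero within tolerance:

```python
print(np.sin(np.pi))                  # ~1.2e-16, not exactly 0
print(np.isclose(np.sin(np.pi), 0))   # True: zero to within machine precision
```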
Inverse trigonometric functions are also available:
x = [-1, 0, 1]
print("x = ", x)
print("arcsin(x) = ", np.arcsin(x))
print("arccos(x) = ", np.arccos(x))
print("arctan(x) = ", np.arctan(x))
Exponentials are another common type of operation available as NumPy ufuncs:
x = [1, 2, 3]
print("x =", x)
print("e^x =", np.exp(x))
print("2^x =", np.exp2(x))
print("3^x =", np.power(3, x))
The inverse of the exponentials, the logarithms, are also available.
The basic np.log gives the natural logarithm; if you prefer to compute the base-2 logarithm or the base-10 logarithm, these are available as well:
x = [1, 2, 4, 10]
print("x =", x)
print("ln(x) =", np.log(x))
print("log2(x) =", np.log2(x))
print("log10(x) =", np.log10(x))
There are also some specialized versions that are useful for maintaining precision with very small input:
x = [0, 0.001, 0.01, 0.1]
print("exp(x) - 1 =", np.expm1(x))
print("log(1 + x) =", np.log1p(x))
When x is very small, these functions give more precise values than if the raw np.log or np.exp were to be used.
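To see the difference, here is a small sketch comparing the raw formula with expm1 for a tiny input:

```python
tiny = 1e-10
print(np.exp(tiny) - 1)   # ~1.00000008274e-10 -- rounding error is visible
print(np.expm1(tiny))     # ~1.00000000005e-10 -- accurate to full precision
```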
There are many, many more ufuncs available in both NumPy and scipy.special.
Because the documentation of these packages is available online, a web search along the lines of "gamma function python" will generally find the relevant information.
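For example, assuming SciPy is installed, a few of its special functions can be used just like NumPy ufuncs; a minimal sketch:

```python
from scipy import special
import numpy as np

x = np.array([1, 5, 10])
print("gamma(x)     =", special.gamma(x))    # gamma function (a generalized factorial)
print("ln|gamma(x)| =", special.gammaln(x))  # log-gamma, more stable for large arguments
print("erf(x)       =", special.erf(np.array([0.0, 0.3, 0.7, 1.0])))  # error function
```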