# How to Represent Data for Neural Networks?

### Description

All current machine-learning systems use tensors as their basic data structure; tensors are fundamental to the field. At its core, a tensor is a container for data, and that data is almost always numerical. So it's a container for numbers. We may already be familiar with matrices, which are 2D tensors: tensors are a generalization of matrices to an arbitrary number of dimensions. In the context of tensors, a dimension is often called an axis.

### Scalars (0D tensors)

A tensor that contains only one number is called a scalar. It is also called a scalar tensor, a 0-dimensional tensor, or a 0D tensor. In Numpy, a float32 or float64 number is a scalar tensor. We can display the number of axes of a Numpy tensor via the ndim attribute. A tensor's number of axes is also called its rank.

Here’s a Numpy scalar:

```python
>>> import numpy as np
>>> x = np.array(12)
>>> x
array(12)
>>> x.ndim
0
```

### Vectors (1D tensors)

A vector is an array of numbers. It is also called a 1D tensor. A 1D tensor has exactly one axis. Following is a Numpy vector:

```python
>>> x = np.array([12, 3, 6, 14])
>>> x
array([12, 3, 6, 14])
>>> x.ndim
1
```

The vector above has four entries, so it is called a 4-dimensional vector; a vector with five entries would likewise be called a 5-dimensional vector. Don't confuse a 5D vector with a 5D tensor! A 5D vector has only one axis and five dimensions along that axis, whereas a 5D tensor has five axes (and may have any number of dimensions along each axis). Dimensionality can denote either the number of entries along a specific axis (as in 5D vector) or the number of axes in a tensor (as in 5D tensor), which can be confusing at times. In the latter case, it's technically more correct to talk about a tensor of rank 5, the rank of a tensor being its number of axes, but the ambiguous notation 5D tensor is common regardless.
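The vector-versus-tensor distinction above can be made concrete with a small Numpy sketch (the arrays here are illustrative, not from the examples above):

```python
import numpy as np

# A 5-dimensional *vector*: one axis, with five entries along it.
v = np.array([12, 3, 6, 14, 7])
print(v.ndim)   # 1 (one axis)
print(v.shape)  # (5,)

# A 5D *tensor* (rank 5): five axes, here with two entries along each.
t = np.zeros((2, 2, 2, 2, 2))
print(t.ndim)   # 5 (five axes)
print(t.shape)  # (2, 2, 2, 2, 2)
```

Both objects are "5D" in everyday parlance, but ndim shows they are entirely different shapes of container.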

### Matrices (2D tensors)

A matrix is an array of vectors. It is also called a 2D tensor. A matrix has two axes, often referred to as rows and columns. We can visually interpret a matrix as a rectangular grid of numbers. This is a Numpy matrix:

```python
>>> x = np.array([[5, 78, 2, 34, 0],
...               [6, 79, 3, 35, 1],
...               [7, 80, 4, 36, 2]])
>>> x.ndim
2
```

The entries from the first axis are called the rows, and the entries from the second axis are called the columns. In the example above, [5, 78, 2, 34, 0] is the first row of x, and [5, 6, 7] is the first column.
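The row and column selections just described can be checked with Numpy indexing, using the matrix from the example:

```python
import numpy as np

x = np.array([[5, 78, 2, 34, 0],
              [6, 79, 3, 35, 1],
              [7, 80, 4, 36, 2]])

# Indexing along the first axis selects a row.
print(x[0])     # first row: [ 5 78  2 34  0]

# Indexing along the second axis selects a column.
print(x[:, 0])  # first column: [5 6 7]
```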

### 3D and higher-dimensional tensors

If we pack such matrices in a new array, we obtain a 3D tensor, which we can visually interpret as a cube of numbers. Following is a Numpy 3D tensor:

```python
>>> x = np.array([[[5, 78, 2, 34, 0],
...                [6, 79, 3, 35, 1],
...                [7, 80, 4, 36, 2]],
...               [[5, 78, 2, 34, 0],
...                [6, 79, 3, 35, 1],
...                [7, 80, 4, 36, 2]],
...               [[5, 78, 2, 34, 0],
...                [6, 79, 3, 35, 1],
...                [7, 80, 4, 36, 2]]])
>>> x.ndim
3
```

We can create a 4D tensor by packing 3D tensors in an array and so on. We’ll generally manipulate tensors that are 0D to 4D in deep learning.
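The packing step can be sketched with np.stack, which is one way (an illustrative choice, not the only one) to pack tensors along a new axis:

```python
import numpy as np

# One 3 x 5 matrix (2D tensor), as in the earlier example.
m = np.array([[5, 78, 2, 34, 0],
              [6, 79, 3, 35, 1],
              [7, 80, 4, 36, 2]])

# Packing three matrices along a new axis yields a 3D tensor ...
cube = np.stack([m, m, m])
print(cube.ndim)        # 3

# ... and packing three such 3D tensors yields a 4D tensor.
hypercube = np.stack([cube, cube, cube])
print(hypercube.ndim)   # 4
print(hypercube.shape)  # (3, 3, 3, 5)
```

Each call to np.stack adds exactly one axis, which is why the rank climbs by one at every packing step.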

### Key attributes

We can define a tensor by three key attributes:

1. Number of axes (rank): For example, a 3D tensor has three axes, and a matrix has two. This is also called the tensor's ndim in Python libraries such as Numpy.
2. Shape: This is a tuple of integers that describes how many dimensions the tensor has along each axis. For instance, the previous matrix example has shape (3, 5), and the 3D tensor example has shape (3, 3, 5). A scalar has an empty shape, (), and a vector has a shape with a single element, such as (5,).
3. Data type: This is the type of the data contained in the tensor. For example, a tensor's type may be float32; it may also be uint8, float64, and so on. On rare occasions, we may see a char tensor. Because tensors live in preallocated, contiguous memory segments, string tensors don't exist in Numpy or in most other libraries.
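All three attributes can be read directly off a Numpy tensor; here is a minimal sketch using the 3D tensor from the earlier example:

```python
import numpy as np

# The 3D tensor from the earlier example: three 3 x 5 matrices packed together.
m = np.array([[5, 78, 2, 34, 0],
              [6, 79, 3, 35, 1],
              [7, 80, 4, 36, 2]])
x = np.stack([m, m, m])

print(x.ndim)   # 1. number of axes (rank): 3
print(x.shape)  # 2. dimensions along each axis: (3, 3, 5)
print(x.dtype)  # 3. data type of the entries (platform-dependent integer type, e.g. int64)
```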