The following notes are based on the book "Introduction to Applied Linear Algebra - Vectors, Matrices, and Least Squares" by Stephen Boyd and Lieven Vandenberghe, which is freely available online. It is a very useful resource for anyone interested in Data Science and Artificial Intelligence, since both fields are built from various shades of statistics intertwined with applied mathematics, in which linear algebra plays a dominant role.
A vector is an ordered, finite list of numbers, usually written as:
$$ \begin{bmatrix}-1.1\\0.0\\3.6\\-7.2\end{bmatrix} \quad \text{or} \quad \begin{pmatrix}-1.1\\0.0\\3.6\\-7.2\end{pmatrix} \quad \text{or} \quad \begin{bmatrix} -1.1 & 0.0 & 3.6 & -7.2 \end{bmatrix}$$
A vector consists of elements, also called components, entries, or coefficients. In the example above, the elements are -1.1, 0.0, 3.6, and -7.2. (As you may notice, in linear algebra, as well as in Data Science and Machine Learning, which use linear algebra heavily, many things have multiple names, and sometimes this gets confusing.)
The size, also called the dimension or length, of a vector is the number of elements it contains. In the example above, the size of each vector is 4. A vector of size n is called an n-vector. If n = 1, we have a 1-vector, for example [3.6]. It is important to note that the 1-vector [3.6] is not the same object as the scalar 3.6: a scalar has only a magnitude, while a vector, even a one-dimensional one, has both magnitude and direction. In linear algebra, however, 1-vectors are often treated as scalars, whereas in other fields, such as physics, they are not treated that way.
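As a small sketch of these definitions in code (using NumPy, a common choice for numerical work, though the book itself is language-agnostic):

```python
import numpy as np

# The 4-vector from the example above.
a = np.array([-1.1, 0.0, 3.6, -7.2])

# Its size (dimension, length) is the number of elements it contains.
print(len(a))    # → 4
print(a.shape)   # → (4,)

# NumPy keeps a 1-vector and a plain number as distinct types,
# mirroring the distinction between [3.6] and 3.6 discussed above.
one_vector = np.array([3.6])
scalar = 3.6
print(one_vector.shape)  # → (1,)
```

Note that `a.shape` returns a tuple, `(4,)`, rather than the bare number 4; this becomes relevant later when vectors are contrasted with matrices.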
It is convenient to denote vectors with letters and their elements with subscripts: for example, a vector a has elements \(a_1\), \(a_2\), \(a_3\), and so on. We can write an element of a vector as \(a_i\), where the index i runs from 1 to n, the size of the vector.
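The subscript notation maps onto code with one caveat, sketched below: mathematical indices run from 1 to n, while Python (and NumPy) indexes from 0, so \(a_1\) corresponds to `a[0]`.

```python
import numpy as np

a = np.array([-1.1, 0.0, 3.6, -7.2])

# Math notation indexes from 1 (a_1, ..., a_n);
# Python indexes from 0, so a_1 is a[0] and a_4 is a[3].
a_1 = a[0]   # → -1.1
a_4 = a[3]   # → -7.2

# Iterating over all elements a_i for i = 1, ..., n:
for i, a_i in enumerate(a, start=1):
    print(f"a_{i} = {a_i}")
```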