# Lebesgue Integration

Terminology: we say the limit of a sequence or sum, or the sup or inf of a set, exists if it is finite (and unique). If it is $+\infty$ or $-\infty$, we say the limit/sup/inf is defined, i.e. unbounded, but does not exist. In any other case, e.g. $\infty-\infty$, we say it does not exist (or is undefined).

Definition: A partition of a set $X$ is a collection of non-empty (pairwise) disjoint subsets of $X$ whose union equals $X$.

Definition: A real-valued simple function is a function that takes a finite or countable number of real values (NOT extended real values, so $\pm\infty$ are excluded), i.e. its range is finite or countable. Note that the definition does not put any restriction on the domain or codomain of the function.

For example, $f:\mathbb{R}\to\mathbb{R}$ defined as

$$f(x)=\begin{cases}1 & x\ge 0\\ -1 & x<0\end{cases}$$

is a simple function (any function with a finite or countable range serves here).

A simple function $\phi$ on an interval $[a,b]$ can be defined as

$$\phi(x)=\sum_{i=1}^{n} c_i\,\chi_{E_i}(x)$$

where the $c_i$ are constants such that $c_i\ne c_j$ for $i\ne j$, the sets $E_i$ are pairwise disjoint with $\bigcup_{i=1}^n E_i=[a,b]$, and $\chi_{E_i}$ is called the characteristic function of $E_i$, being defined as

$$\chi_{E_i}(x)=\begin{cases}1 & x\in E_i\\ 0 & x\notin E_i\end{cases}$$

In words, the interval $[a,b]$ was partitioned into pairwise disjoint sets $E_i$ where $\phi(x)=c_i$ for $x\in E_i$.

The above definition is called the canonical representation of a simple function and is equivalent to

$$E_i=\phi^{-1}(\{c_i\})=\{x\in[a,b]:\phi(x)=c_i\}.$$

The domain of a simple function is not restricted to intervals; a simple function can be $\phi:X\to\mathbb{R}$ with its domain $X$ being any set.
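As an illustrative sketch (not from the text), the canonical representation above can be coded directly: a simple function stored as value/set pairs and evaluated pointwise. The pairs chosen here are hypothetical.

```python
# Sketch: a simple function in canonical form, phi = sum_i c_i * chi_{E_i},
# stored as (c_i, E_i) pairs with the E_i pairwise disjoint.

def make_simple(pairs):
    """pairs: list of (c_i, E_i) with E_i a set of points.
    Returns phi(x) = sum_i c_i * chi_{E_i}(x)."""
    def phi(x):
        for c, E in pairs:
            if x in E:  # chi_{E_i}(x) = 1
                return c
        return 0.0      # x lies outside every E_i
    return phi

phi = make_simple([(1.0, {0, 1}), (-2.0, {2, 3})])
print(phi(1), phi(3), phi(5))  # 1.0 -2.0 0.0
```

The disjointness of the $E_i$ is what makes the first matching pair the only matching pair.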

Remark: a step function is a type of simple function, i.e. a simple function $\phi=\sum_i c_i\chi_{E_i}$ is called a step function when its domain is an interval and each $E_i$ is an interval of real numbers.

Definition: a measurable function is called a simple measurable function if it is also a simple function.

### Approximating a function by simple function(s)

Let $f:X\to\mathbb{R}$ and let $R_f$ be the range of the function. If we partition the range as $R_f=\bigcup_i A_i$ where the subsets $A_i$ are pairwise disjoint, we can construct a simple function approximating $f$ as

$$\phi(x)=\sum_i y_i\,\chi_{E_i}(x)$$

where $y_i\in A_i$ and $E_i=f^{-1}(A_i)$. Note that $\{E_i\}$ is a partition of $X$ into disjoint sets because the $A_i$ are pairwise disjoint. Also, $\phi(x)\approx f(x)$.

To construct the simple function approximation, the range of the function was partitioned and then the domain was implicitly partitioned through the preimage. The reason is that, firstly, partitioning the real line as the codomain/range can be readily performed. Secondly, this approach is the foundation of Lebesgue integration. Lebesgue integration partitions the range into intervals; then each summand is a number in one of the intervals of the partition times the measure of the preimage of that interval. This makes Lebesgue integration capable of handling functions that are not Riemann integrable, and also gives it nice properties like interchanging the limit and integration operators. Any real-valued function can be written as the limit of a sequence of simple functions. For non-negative functions, however, the following stronger theorem exists.

Theorem L1: Let $f:X\to[0,\infty]$ be a non-negative measurable function. Then, there exists a monotonically increasing sequence of non-negative measurable simple functions $\{\phi_n\}$ (where $\phi_n\le\phi_{n+1}$) such that $\phi_n(x)\to f(x)$ for every $x\in X$ (this is a pointwise convergence). The approximation of $f$ by $\phi_n$ is an approximation from below, i.e. $\phi_n\le f$. Such a sequence can be constructed as (this proves the existence)

$$\phi_n(x)=\sum_{k=1}^{n2^n}\frac{k-1}{2^n}\,\chi_{E_{n,k}}(x)+n\,\chi_{F_n}(x)$$

where

$$E_{n,k}=f^{-1}\!\left(\left[\frac{k-1}{2^n},\frac{k}{2^n}\right)\right),\qquad F_n=f^{-1}([n,\infty]).$$

This is explained as follows. Regarding each $\phi_n$, the codomain of $f$ is partitioned as $[0,n)\cup[n,\infty]$. Then, $[0,n)$ is divided/partitioned into $n2^n$ subintervals of length $2^{-n}$, i.e.

$$[0,n)=\bigcup_{k=1}^{n2^n}\left[\frac{k-1}{2^n},\frac{k}{2^n}\right).$$

The rest of the codomain, $[n,\infty]$, is considered as a single set. Then, the preimage of each subinterval is determined (on the domain) as $E_{n,k}$ and $F_n$ above. These preimages partition the domain into pairwise disjoint sets because the partitioning sets of the codomain are disjoint. With these assumptions, each $\phi_n$ is a simple function. Note that each $\phi_n$ is bounded (by $n$).
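The standard construction above can be checked numerically. Below is an illustrative sketch: `phi(n, fx)` evaluates $\phi_n$ at a point where $f$ takes the value `fx`, using the equivalent closed form $\phi_n=\min(n,\lfloor 2^n f\rfloor/2^n)$; the sample value $\pi$ is arbitrary.

```python
import math

def phi(n, fx):
    """phi_n evaluated at a point where f takes the value fx >= 0:
    (k-1)/2^n on f^{-1}([(k-1)/2^n, k/2^n)) and n on f^{-1}([n, inf))."""
    if fx >= n:
        return float(n)
    return math.floor(fx * 2 ** n) / 2 ** n

fx = math.pi  # sample value f(x) at some fixed point x
vals = [phi(n, fx) for n in range(1, 12)]

assert all(a <= b for a, b in zip(vals, vals[1:]))  # monotonically increasing
assert all(v <= fx for v in vals)                   # approximation from below
print(vals[-1])  # within 2^{-11} of pi
```

Each refinement halves the mesh of the range partition, so the error at a point with $f(x)<n$ is at most $2^{-n}$.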

Proposition L1: Any real-valued measurable function $f$ can be approximated by monotonically increasing sequences of non-negative measurable simple functions if written as $f=f^+-f^-$, where $f^+(x)=\max(f(x),0)$ and $f^-(x)=\max(-f(x),0)$. Note that both $f^+$ and $f^-$ are non-negative and hence satisfy the conditions of Theorem L1.
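A quick pointwise sketch of the positive/negative-part decomposition used in Proposition L1:

```python
def fplus(y):   # positive part: f^+ = max(f, 0)
    return max(y, 0.0)

def fminus(y):  # negative part: f^- = max(-f, 0)
    return max(-y, 0.0)

for y in [-3.5, 0.0, 2.25]:
    assert fplus(y) - fminus(y) == y           # f = f^+ - f^-
    assert fplus(y) + fminus(y) == abs(y)      # |f| = f^+ + f^-
    assert fplus(y) >= 0 and fminus(y) >= 0    # both non-negative
print("decomposition checks pass")
```

The identity $|f|=f^++f^-$ is also what links integrability of $f$ to that of $|f|$ later on.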

Proposition L2: if $f$ is bounded, then the sequence $\{\phi_n\}$ of Theorem L1 converges to $f$ uniformly.

### Integration with respect to a measure/ Lebesgue integral

Definition: Let $(X,\mathfrak{M},\mu)$ be a measure space and $f:X\to\mathbb{R}$ be a bounded measurable function. If $E\in\mathfrak{M}$ with $\mu(E)<\infty$, and $P=\{E_i\}_{i=1}^n$ is a disjoint measurable partition of $E$, i.e. $E=\bigcup_{i=1}^n E_i$ and $E_i\cap E_j=\emptyset$ for $i\ne j$, define

$$L(f,P)=\sum_{i=1}^n\Big(\inf_{x\in E_i}f(x)\Big)\,\mu(E_i),\qquad U(f,P)=\sum_{i=1}^n\Big(\sup_{x\in E_i}f(x)\Big)\,\mu(E_i).$$

Also, if $P'=\{E'_j\}$ is another disjoint partition of $E$, we write $P'\succeq P$ and say $P'$ is a refinement of $P$ if every $E'_j\subseteq E_i$ for some $i$.

Then, we can show:

1. $L(f,P)\le U(f,P)$.
2. If $P'\succeq P$ then $L(f,P')\ge L(f,P)$, i.e. the lower sums increase as the partition refines.
3. If $P'\succeq P$ then $U(f,P')\le U(f,P)$, i.e. the upper sums decrease as the partition refines.
4. $\sup_P L(f,P)\le\inf_P U(f,P)$.

We say $f$ is Lebesgue integrable over $E$ if $\sup_P L(f,P)=\inf_P U(f,P)$. In this case we write

$$\int_E f\,d\mu=\sup_P L(f,P)=\inf_P U(f,P).$$

Remark: $X$ can be a set of any objects (numbers, symbols, animals, etc.).

#### The Lebesgue integration of a simple measurable function over a set of finite measure

Theorem L2: Let $(X,\mathfrak{M},\mu)$ be a measure space and $\phi$ a simple measurable function written as $\phi=\sum_{i=1}^n c_i\chi_{E_i}$, where $E=\bigcup_{i=1}^n E_i$, $\mu(E)<\infty$, and the $E_i$ are pairwise disjoint. Then, $\phi$ is Lebesgue integrable on $E$ iff $\sum_{i=1}^n c_i\,\mu(E_i)$ exists. In this case we write

$$\int_E\phi\,d\mu=\sum_{i=1}^n c_i\,\mu(E_i).$$

For the proof, note that $\{E_i\}$ is a (disjoint and measurable) partition of $E$ and $\inf_{x\in E_i}\phi(x)=\sup_{x\in E_i}\phi(x)=c_i$ on each $E_i$.

In the above formulation, the sum is over the size of the partition, i.e. $i=1,\dots,n$, and the $c_i$ are sequentially in correspondence with the $E_i$. We can, however, write the sum over the size of the range of the simple function. The range of a simple measurable function is a discrete, i.e. countable, set $R_\phi=\{y_1,y_2,\dots\}$. If the number of elements in (the size of) $R_\phi$ is $m$ and $F_j=\phi^{-1}(\{y_j\})$, then

$$\int_E\phi\,d\mu=\sum_{j=1}^m y_j\,\mu(F_j).$$

Note that the range of the function is a set and it indeed contains distinct values.

Example: Let be the Lebesgue measure and be defined as,

Then, Lebesgue integration by summing over the size of the partition gives,

And, the integration by summing over the range of the function () gives,
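The formulas of the example above did not survive transcription, so here is a hypothetical stand-in showing both summations agreeing: take $\phi=2$ on $[0,1)$, $5$ on $[1,3)$, $2$ on $[3,4)$ with Lebesgue measure (interval length).

```python
from collections import defaultdict

# Hypothetical simple function as (c_i, mu(E_i)) pairs; note c_1 = c_3 = 2,
# so the partition has 3 pieces but the range has only 2 values.
partition = [(2.0, 1.0), (5.0, 2.0), (2.0, 1.0)]

# Sum over the partition: sum_i c_i * mu(E_i)
integral_partition = sum(c * m for c, m in partition)

# Sum over the range: sum_j y_j * mu(phi^{-1}({y_j}))
measure_of_level_set = defaultdict(float)
for c, m in partition:
    measure_of_level_set[c] += m
integral_range = sum(y * m for y, m in measure_of_level_set.items())

assert integral_partition == integral_range == 14.0
print(integral_partition)
```

Merging the pieces where $\phi$ takes the same value is exactly the passage from the partition sum to the range sum.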

Example: Let be a measure space and,

if , we calculate as follows.

We note that the support of the integrand has a finite measure with respect to the defined measure. Therefore, we can write,

Remark: Lebesgue integrability depends on the measure involved.

Remark: if then for any set , we can write

Proposition L3: The following properties hold for integration of simple measurable functions over sets of finite measure:

1. Linearity: $\int_E(a\phi+b\psi)\,d\mu=a\int_E\phi\,d\mu+b\int_E\psi\,d\mu$.
2. If $E_1\cap E_2=\emptyset$ then $\int_{E_1\cup E_2}\phi\,d\mu=\int_{E_1}\phi\,d\mu+\int_{E_2}\phi\,d\mu$.
3. $\left|\int_E\phi\,d\mu\right|\le\int_E|\phi|\,d\mu$.
4. If $\phi\le\psi$ on $E$, then $\int_E\phi\,d\mu\le\int_E\psi\,d\mu$.
5. If $\phi=\psi$ almost everywhere on $E$, then $\int_E\phi\,d\mu=\int_E\psi\,d\mu$.
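As a sketch, properties 1 and 4 can be checked numerically for the simplest possible measure space: a finite set with counting measure, where every function is simple and the integral is just a sum (an illustrative setup, not from the text).

```python
# Counting measure on X = {0,...,4}: mu(E) = |E|, so integral = sum of values.
X = range(5)

def integral(f, E):
    """Integral of f over E w.r.t. counting measure."""
    return sum(f(x) for x in E)

phi = lambda x: float(x % 2)  # a simple function
psi = lambda x: float(x)      # another simple function, psi >= phi on X
E = list(X)

# 1. Linearity
lhs = integral(lambda x: 3 * phi(x) + 2 * psi(x), E)
rhs = 3 * integral(phi, E) + 2 * integral(psi, E)
assert lhs == rhs

# 4. Monotonicity: phi <= psi on E implies the same ordering of integrals
assert all(phi(x) <= psi(x) for x in E)
assert integral(phi, E) <= integral(psi, E)
print(lhs, rhs)
```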

#### The Lebesgue integration of a bounded measurable function over a set of finite measure

The general definition of Lebesgue integration and the formulation for the Lebesgue integration of a simple function can be used to (re-)define the Lebesgue integration of a bounded function on a set of finite measure. In this regard, the lower and upper Lebesgue integrals of $f$ on $E$ with $\mu(E)<\infty$ are defined as

$$\underline{\int_E}f\,d\mu=\sup\left\{\int_E\varphi\,d\mu:\ \varphi\text{ simple},\ \varphi\le f\text{ on }E\right\}$$

and

$$\overline{\int_E}f\,d\mu=\inf\left\{\int_E\psi\,d\mu:\ \psi\text{ simple},\ \psi\ge f\text{ on }E\right\}$$

which are bounded and satisfy $\underline{\int_E}f\,d\mu\le\overline{\int_E}f\,d\mu$, in which $\mu$ is a measure on $X$; the operations sup and inf are over all simple functions approximating the function from below and from above, respectively.

By definition, if $\underline{\int_E}f\,d\mu=\overline{\int_E}f\,d\mu$, we say $f$ is Lebesgue integrable over $E$ and its integral equals the common value, denoted as $\int_E f\,d\mu$.

Theorem [Lebesgue integrability of functions]: If $f$ is a bounded measurable function on a set $E$ of finite measure, i.e. $\mu(E)<\infty$, then $f$ is Lebesgue integrable, i.e. $\int_E f\,d\mu$ exists. The converse is also true: if a bounded function on a set of finite measure is integrable, then the function is measurable.

Proposition L4: Proposition L3 holds for the aforementioned bounded measurable functions on sets of finite measure.

Theorem L3: Let $f$ be a bounded measurable function on $E$ with $\mu(E)<\infty$. If $E=\bigcup_i E_i$ where $\{E_i\}$ is an at most countable family of pairwise disjoint measurable sets, then $\int_E f\,d\mu=\sum_i\int_{E_i}f\,d\mu$.

An example that can be solved by the above theorem is such that where , and such that and otherwise .

Proposition [Lebesgue and Riemann integrations]: Let $f$ be a bounded measurable function on $[a,b]$ where $\mu$ is the Lebesgue measure (n-dimensional interval length). If $f$ has a finite number of discontinuities, then the following integrals exist and

Lebesgue integration: $\int_{[a,b]}f\,d\mu$ = Riemann integration: $\int_a^b f(x)\,dx$.

Riemann integration of a bounded function on a set of finite measure can be regarded as a particular case of the general Lebesgue integration. In fact, Riemann integration is based on subdividing the domain of a function whereas Lebesgue integration is based on subdividing the range of a function and using the inverse image to create measurable subdivision on the domain of the function.
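The domain-versus-range contrast can be made concrete. The sketch below (illustrative; the function $f(x)=x^2$ on $[0,1]$ is an arbitrary choice) approximates $\int_0^1 x^2\,dx=\tfrac13$ two ways: a Riemann lower sum over a uniform domain partition, and a Lebesgue-style sum over a uniform range partition, where each term is a range value times the length of its preimage (an interval, since $x^2$ is increasing here).

```python
import math

n = 10_000

# Riemann: partition the domain [0,1] into n intervals of width 1/n
riemann = sum((i / n) ** 2 * (1 / n) for i in range(n))

# Lebesgue: partition the range [0,1) into n intervals [k/n, (k+1)/n);
# the preimage under x^2 has measure sqrt((k+1)/n) - sqrt(k/n)
lebesgue = sum(
    (k / n) * (math.sqrt((k + 1) / n) - math.sqrt(k / n)) for k in range(n)
)

print(riemann, lebesgue)  # both close to 1/3
```

For a function this tame both sums converge to the same value; the Lebesgue sum, however, only needs the preimages to be measurable, which is why it survives for functions with no Riemann integral.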

#### The Lebesgue integration of an unbounded measurable function over a measurable set

Riemann integration is defined as the limit of the Riemann sum for bounded functions on bounded domains. For unbounded functions or domains, Riemann integration is defined through limits:

1- If $f$ is continuous (possibly with a finite number of discontinuities) on $(a,b]$ and unbounded at $a$, i.e. $|f(x)|\to\infty$ as $x\to a^+$, then the following Riemann integral is defined provided that the limit on the RHS exists (i.e. is finite),

$$\int_a^b f(x)\,dx=\lim_{\epsilon\to0^+}\int_{a+\epsilon}^b f(x)\,dx,$$

that is, calculating the integral for a bounded function and then taking the limit.

If $f$ is unbounded as $x\to b^-$, we define $\int_a^b f(x)\,dx=\lim_{\epsilon\to0^+}\int_a^{b-\epsilon}f(x)\,dx$. And if $f$ is unbounded at some $c\in(a,b)$, then we split the integral at $c$ and write the limits.

2- If the domain of integration is unbounded with respect to a variable, then the integration is defined as a limit, if it exists. For example, if $f$ is continuous, then,

$$\int_a^\infty f(x)\,dx=\lim_{b\to\infty}\int_a^b f(x)\,dx.$$

The above integrations are classified as improper Riemann integrations. Lebesgue integration approaches these unbounded cases in a natural (more general) way. Therefore, the term improper is not used with these cases of Lebesgue integration.

Definition: A measurable function $f$ on a set $E$ is of finite support if there is a set $E_0\subseteq E$ for which $\mu(E_0)<\infty$ and $f=0$ on $E\setminus E_0$. The support of $f$ then becomes the set over which the function does not vanish.

Proposition: Let $f$ be a bounded measurable function on $E$ with a finite support $E_0$; then $\int_E f\,d\mu=\int_{E_0}f\,d\mu$.

Proof: by assumption, $f=0$ on $E\setminus E_0$. The proposition is already proved if $\mu(E)<\infty$. For the case that $\mu(E)=\infty$, i.e. unbounded, we can write,

Therefore, we can write $f$ as a simple function,

and by using the formula for Lebesgue integration of a simple function we conclude that,

Remark: in the above we showed that the integral over $E\setminus E_0$ equals zero even if $\mu(E\setminus E_0)=\infty$. Note that we did not write $0\cdot\mu(E\setminus E_0)$; this expression, being $0\cdot\infty$, is undefined. Rather, we used the integral itself being equal to zero. We should also note that the theorem on the integration of simple functions holds for sets of finite measure.

Proposition: Let $f$ be a bounded measurable function on $E$ (with finite or infinite measure). If $f=0$ almost everywhere on $E$, then $\int_E f\,d\mu=0$.

To move ahead and define Lebesgue integration for any measurable function, including unbounded ones on any measurable support, non-negative functions are considered first. Considering $f\ge0$ allows using lower approximations of the function by bounded functions of finite support.

Definition L1 [Lebesgue integration of non-negative functions]: For $f\ge0$ on $E$, the integration is defined as

$$\int_E f\,d\mu=\sup\left\{\int_E h\,d\mu:\ h\text{ bounded, measurable, of finite support},\ 0\le h\le f\text{ on }E\right\}$$

where the supremum is over all functions $h$ as described. If the supremum of the above set of values exists (is finite), we say the function is integrable. If its value is infinity, the integral is defined but unbounded. Note that $0\le h\le f$ is a pointwise expression, meaning $0\le h(x)\le f(x)$ at each $x\in E$. Also, each $h$ is of finite support, meaning that except over some part of the domain with a finite measure, $h$ vanishes over the rest of its domain.

The above definition can also prove that if $f=g$ almost everywhere on $E$, then $\int_E f\,d\mu=\int_E g\,d\mu$. To this end, let $E_0=\{x\in E:f(x)\ne g(x)\}$, which has measure zero.

Definition L2 [Lebesgue integration of functions]: The Lebesgue integration for any measurable function $f$ and a measurable set $E$ is defined as

$$\int_E f\,d\mu=\int_E f^+\,d\mu-\int_E f^-\,d\mu,$$

provided that at least one of the integrals on the RHS is finite. If the integral equals $+\infty$ or $-\infty$, the integral is defined; however, we say the function is integrable only if the integral is finite. The case $\infty-\infty$ is undefined.

Theorem L4: let $\phi=\sum_i c_i\chi_{E_i}$ be a non-negative simple function and $E$ a measurable set. Then, $\int_E\phi\,d\mu=\sum_i c_i\,\mu(E_i\cap E)$.

Note that since $c_i\ge0$, the sum is always defined, either finite or $+\infty$.

Theorem L5: Let $f$ be non-negative and measurable. Define $\varphi(A)=\int_A f\,d\mu$ for all $A\in\mathfrak{M}$. Then

(a) $\varphi$ is countably additive on $\mathfrak{M}$, i.e. for $A=\bigcup_{n=1}^\infty A_n$ with the $A_n\in\mathfrak{M}$ pairwise disjoint, which means,

$$\varphi(A)=\sum_{n=1}^\infty\varphi(A_n).$$

(b) for any not necessarily non-negative function $f$, $\varphi$ is countably additive on $\mathfrak{M}$ if $f$ is integrable on $A$.

Proof:

(a) If $\phi$ is a simple function such that $0\le\phi\le f$, then by Theorem L4 we can write,

Therefore, by Definition L1,

Because $A_n\subseteq A$ and $f\ge0$, and $\varphi$ is monotone with respect to set inclusion (being a supremum of non-negative integrals), it is clear that $\varphi(A)\ge\varphi(A_n)$ for any $n$. Note that $\varphi\ge0$. Now, if $\varphi(A_n)=\infty$ for some $n$, then $\varphi(A)=\infty$ and, by the above results, $\sum_n\varphi(A_n)=\infty$ as well; therefore, $\varphi(A)=\sum_n\varphi(A_n)$ and the claim is trivial.

So, we assume that $\varphi(A_n)<\infty$ for every $n$, i.e. finite. Therefore, for any $\epsilon>0$ we can find a simple function $\phi$ with $0\le\phi\le f$ such that,

Because $\epsilon$ is arbitrary, the above indicates that $\varphi(A_1\cup A_2)\ge\varphi(A_1)+\varphi(A_2)$. It follows (by induction if you want) that, for any $n$,

And because $A\supseteq A_1\cup\cdots\cup A_n$,

Therefore (considering the first part of the proof),

(b) If $f$ is integrable, then each of $\int f^+\,d\mu$ and $\int f^-\,d\mu$ is finite, and the proof of (a) can be applied to each part.

Corollary L1: for sets $A$ and $B$ such that $B\subseteq A$ and $\mu(A\setminus B)=0$, we have $\int_A f\,d\mu=\int_B f\,d\mu$. This shows that a set of measure zero is negligible in integration.

Proposition: Let $f$ be a non-negative bounded measurable function on $E$. If $\int_E f\,d\mu=0$, then $f=0$ almost everywhere on $E$.

Theorem L6: If a measurable function $f$ is (Lebesgue) integrable (with finite integral, in fact) with respect to a measure $\mu$ on $E$, then $|f|$ is (Lebesgue) integrable on $E$, and $\left|\int_E f\,d\mu\right|\le\int_E|f|\,d\mu$.

Proof: Let $E=A\cup B$ be a disjoint partition such that $f\ge0$ on $A$ and $f<0$ on $B$. Then, by Theorem L5,

For the second part, since $f\le|f|$ and $-f\le|f|$, we can write $\int_E f\,d\mu\le\int_E|f|\,d\mu$ and $-\int_E f\,d\mu\le\int_E|f|\,d\mu$, which implies $\left|\int_E f\,d\mu\right|\le\int_E|f|\,d\mu$.

By the above theorem, we see that integrability of $f$ implies that of $|f|$. Because of that, the Lebesgue integral is called an absolutely convergent integral. It should be noted that improper Riemann integration is not necessarily absolutely convergent.

Theorem L7: For a measurable function $f$ on $E$, if $|f|\le g$ and $g$ is integrable on $E$, then $f$ is integrable on $E$.

Theorem L8 [Lebesgue’s monotone convergence theorem]: For a measurable set $E$, let $\{f_n\}$ be a sequence of measurable functions such that

$$0\le f_1(x)\le f_2(x)\le\cdots\quad\text{for all }x\in E.$$

Let $f$ be defined as the following pointwise convergence,

$$f_n(x)\to f(x)\quad\text{as }n\to\infty,\ x\in E.$$

Then,

$$\lim_{n\to\infty}\int_E f_n\,d\mu=\int_E f\,d\mu.$$

Note that the sequence converges to $f$ from below. Also, $f$ may or may not be bounded, and hence likewise the integral. For the proof see [WR].
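A numeric sketch of the theorem (an illustrative setup, not from the text): take the countable measure space $X=\{0,1,2,\dots\}$ with $\mu(\{k\})=2^{-k}$, $f(k)=k$, and the increasing truncations $f_n=\min(f,n)$; then $\int f_n\,d\mu$ increases to $\int f\,d\mu=\sum_k k\,2^{-k}=2$.

```python
# Measure space: X = {0,1,2,...}, mu({k}) = 2^{-k}; integrals are weighted sums.
K = 200  # truncation of the series; the tail beyond is negligible

def integral(f):
    return sum(f(k) * 2.0 ** (-k) for k in range(K))

f = lambda k: float(k)
exact = integral(f)  # sum k / 2^k = 2

# increasing truncations f_n = min(f, n), each bounded
vals = [integral(lambda k, n=n: min(f(k), float(n))) for n in range(1, 30)]

assert all(a <= b for a, b in zip(vals, vals[1:]))  # integrals increase
print(exact)  # ~ 2.0
```

Each $f_n$ is bounded even though $f$ is not, which is exactly the situation the theorem handles.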

Theorem L8′: For a measurable set $E$, if $\{f_n\}$ is a sequence of nonnegative measurable functions ($f_n\ge0$) and

$$f(x)=\sum_{n=1}^\infty f_n(x),\qquad x\in E,$$

then,

$$\int_E f\,d\mu=\sum_{n=1}^\infty\int_E f_n\,d\mu.$$

Note that each $f_n$ has to be nonnegative, but the sequence doesn’t need to be monotone. The proof of this theorem is by noting that the partial sums of the infinite sum form a monotonically increasing sequence and using Theorem L8.

Theorem L9 [Fatou’s theorem]: For a measurable set $E$, if $\{f_n\}$ is a sequence of nonnegative measurable functions and $f(x)=\liminf_{n\to\infty}f_n(x)$, then

$$\int_E f\,d\mu\le\liminf_{n\to\infty}\int_E f_n\,d\mu.$$

Note that $\{f_n\}$ does not need to be monotone.

From Theorem L9, we can conclude that if a measurable function $f$ is the limit of a sequence of nonnegative measurable functions $\{f_n\}$, then $\int_E f\,d\mu\le\liminf_n\int_E f_n\,d\mu$. This is because $\liminf_n f_n=\lim_n f_n=f$ for any convergent sequence of functions. If the sequence is monotonically increasing, then Theorem L8 gives equality.

Theorem L10 [Lebesgue’s dominated convergence theorem]: for a measurable set $E$, let $\{f_n\}$ be a sequence of measurable functions such that $f_n(x)\to f(x)$ pointwise on $E$. If there exists an integrable function $g$ on $E$ such that $|f_n(x)|\le g(x)$ for all $n$ and $x\in E$, meaning that the sequence is dominated by $g$, then

$$\lim_{n\to\infty}\int_E f_n\,d\mu=\int_E f\,d\mu.$$

Corollary L2: If $\mu(E)<\infty$, i.e. finite measure, and $\{f_n\}$ is uniformly bounded on $E$, and $f_n\to f$ pointwise on $E$, then Theorem L10 holds.

[WR] Walter Rudin, *Principles of Mathematical Analysis*, Third Edition, McGraw-Hill, 1976.

# Tensors 2

## Tensor as an element of tensor product of vector spaces

Before presenting another way of defining a tensor, we define a notation. A linear map and a bilinear form are respectively written as linear combinations of $L^i{}_j\,e_i\varepsilon^j$ and $B_{ij}\,\varepsilon^i\varepsilon^j$. Any of these juxtapositions of basis (co)vectors (for any dimensions of the corresponding spaces) can be considered as one new object, denoted for example as $e_i\otimes\varepsilon^j$ and $\varepsilon^i\otimes\varepsilon^j$. The writing of the basis vectors and/or basis covectors adjacent to each other, denoted by $\otimes$, is referred to as the tensor product of (basis) vectors. A general definition will be presented later. Using this notation, for now, we can write a linear map and a bilinear form as,

$$L=L^i{}_j\,e_i\otimes\varepsilon^j,\qquad B=B_{ij}\,\varepsilon^i\otimes\varepsilon^j.$$

This notation can be extended to any finite linear combination of tensor products of basis vectors and/or covectors, where the combination coefficients take indices following the index level convention. For example, we can write,

Let’s define tensor product of vectors and covectors and their rules.

### Tensor product of vectors and covectors

Let $u$, $v$, and $w$ be vectors or covectors (not necessarily basis ones); then we define the tensor product of each pair as $u\otimes v$, $v\otimes w$, etc., with the following rules and operations:

0. Order matters: $u\otimes v\ne v\otimes u$ in general.

1. Scalar multiplication: $(\alpha u)\otimes v=u\otimes(\alpha v)=\alpha(u\otimes v)$.

The above rules can be extended to the tensor product of any number of vectors or covectors. For example,

1. Scalar multiplication: $(\alpha u)\otimes v\otimes w=u\otimes(\alpha v)\otimes w=\alpha(u\otimes v\otimes w)$.

The above rules can be recruited to construct vector spaces, called tensor-product vector spaces. For example, if $u\otimes v,\ w\otimes z\in V\otimes W$ and $\alpha\in\mathbb{R}$, then,

Any vector spaces can enter into a tensor product. For example, $V\otimes W$ with members like $v\otimes w$ with $v\in V$ and $w\in W$.

Note that the tensor product of vector spaces can be done on totally different vector spaces over the same field, e.g. $\mathbb{R}^2\otimes\mathbb{R}^3$.

#### Basis for a tensor product space

Let $V$ and $W$ be vector spaces with bases $\{e_i\}_1^n$ and $\{\zeta_j\}_1^m$ respectively. If $v=v^ie_i$ and $w=w^j\zeta_j$, we can write,

$$v\otimes w=(v^ie_i)\otimes(w^j\zeta_j)=v^iw^j\,e_i\otimes\zeta_j.$$

This states that any vector $v\otimes w$ can be written as a linear combination of $\{e_i\otimes\zeta_j\}$. Therefore,

1. The set of vectors $\{e_i\otimes\zeta_j\}$ is a basis for the vector space $V\otimes W$.
2. The dimension of the vector space $V\otimes W$ is $nm$.

The above can be extended to tensor product of any number of vector spaces, i.e. the tensor product of the basis vectors of vector spaces creates a basis for the resultant tensor product space.
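Numerically, the coordinates of $v\otimes w$ in the basis $\{e_i\otimes\zeta_j\}$ are the $nm$ products $v^iw^j$, i.e. the Kronecker product of the coordinate vectors. A sketch with arbitrary sample vectors:

```python
import numpy as np

v = np.array([1.0, 2.0])         # coordinates in V, n = 2
w = np.array([3.0, 4.0, 5.0])    # coordinates in W, m = 3

coords = np.kron(v, w)           # coordinates of v (x) w, length nm = 6
assert coords.shape == (6,)
assert np.allclose(coords, np.outer(v, w).ravel())

# bilinearity of the tensor product: (a v) (x) w = a (v (x) w)
assert np.allclose(np.kron(2.0 * v, w), 2.0 * coords)
print(coords)
```

Note that a general element of $V\otimes W$ is a linear combination of such products; the rank-1 products alone do not fill the space, but their coordinate arrays do span all of $\mathbb{R}^{nm}$.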

Example: Let and and .

Then, is a basis for , and,

### Tensor by tensor product

Definition (Tensor-product view): A tensor is a collection of vectors and covectors combined together by using the tensor product (of vectors and/or covectors). A tensor of type (r,s) is a member of the tensor product space,

$$\underbrace{V^*\otimes\cdots\otimes V^*}_{r\text{ copies}}\otimes\underbrace{V\otimes\cdots\otimes V}_{s\text{ copies}}$$

and written as,

$$T=T^{j_1\cdots j_s}{}_{i_1\cdots i_r}\;\varepsilon^{i_1}\otimes\cdots\otimes\varepsilon^{i_r}\otimes e_{j_1}\otimes\cdots\otimes e_{j_s}.$$

Note that $T^{j_1\cdots j_s}{}_{i_1\cdots i_r}$ collects the components or the coordinates of the tensor with respect to the basis $\{\varepsilon^{i_1}\otimes\cdots\otimes\varepsilon^{i_r}\otimes e_{j_1}\otimes\cdots\otimes e_{j_s}\}$.

In this view, a vector is a (0,1) tensor, a covector is a (1,0) tensor. A linear map is a (1,1) tensor. A bilinear form is a (2,0) tensor. A bilinear map is a (2,1) tensor.

# Tensors 1

#### Notations

Einstein summation convention is used here. A matrix is denoted as $M$ and its $ij$-th element is referred to by $M_{ij}$. Quantities or coefficients are indexed as, for example, $v^i$, $\alpha_i$, or $T^i{}_j$. These indices do not automatically pertain to row and column indices of a matrix, but the quantities can be presented by matrices through isomorphisms once their indices are suitably interpreted as rows and columns of matrices.

## Coordinates of a vector

Let $V$ be an n-dimensional vector space and $\{e_i\}$ with $i=1,\dots,n$ be a basis for $V$. Then, we define the coordinate function as,

$$[\,\cdot\,]_e:V\to\mathbb{R}^n$$

such that for a vector $v\in V$ written by its components (with respect to $\{e_i\}$) as $v=v^ie_i$, the function acts as,

$$[v]_e=(v^1,\dots,v^n)^T.$$

The coordinate function is a linear map.

## Change of basis for vectors

Let $\{e_i\}$ and $\{\tilde e_i\}$ be two bases for $V$; then,

$$\tilde e_j=F^i{}_j\,e_i$$

and

$$e_j=B^i{}_j\,\tilde e_i$$

where the indices of the scalar terms $F^i{}_j$ and $B^i{}_j$ are intentionally set this way. So, if all $F^i{}_j$ are collected into a matrix $F$, then the sum is over the rows of the matrix for a particular column. In other words, we can utilize the rule of matrix multiplication and write,

$$[\tilde e_1\ \cdots\ \tilde e_n]=[e_1\ \cdots\ e_n]\,F.$$

The same is true for $B$. In the above formulations, note that $i$ is a dummy index (i.e. we can equivalently write $\tilde e_j=F^k{}_j\,e_k$).

Setting $\{e_i\}$ as the initial (old) basis and writing the current (new) basis $\{\tilde e_i\}$ in terms of $\{e_i\}$ is referred to as the forward transform, denoted by $F$. Correspondingly, $B$ is called the backward transform.

The relation between the forward and backward transforms is obtained as follows,

$$\tilde e_j=F^i{}_j\,e_i=F^i{}_jB^k{}_i\,\tilde e_k\ \Rightarrow\ B^k{}_iF^i{}_j=\delta^k_j\ \Rightarrow\ B=F^{-1}.$$

We now find how vector coordinates are transformed relative to different bases. A particular $v\in V$ can be expressed by its components according to either the $\{e_i\}$ or the $\{\tilde e_i\}$ basis; therefore,

$$v=v^ie_i=\tilde v^j\tilde e_j.$$

To find the relation between $\tilde v^j$ and $v^i$ we write,

$$v=\tilde v^j\tilde e_j=\tilde v^jF^i{}_j\,e_i=v^ie_i\ \Rightarrow\ v^i=F^i{}_j\tilde v^j\ \Rightarrow\ \tilde v^i=B^i{}_jv^j.$$

As can be observed, the old basis is transformed to the new basis by the forward transform, while the old coordinates are transformed to the new ones, $\tilde v^i=B^i{}_jv^j$, by the backward transform. Because the coordinates of $v$ behave contrary to the basis vectors in transformation, the coordinates or the scalar components are said to be contravariant. A vector can be called a contravariant object because its scalar components (coordinates) transform differently from the basis vectors whose linear combination equals the vector. Briefly,

Proposition: Let $v=v^ie_i=\tilde v^i\tilde e_i$. Then, the scalar components/coordinates are transformed by $B$ if and only if the basis vectors are transformed by $F$, such that $B=F^{-1}$.
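A numeric sketch of this proposition, with an arbitrary invertible forward transform: the new basis vectors are the columns of $E_{\text{old}}F$, coordinates transform with $B=F^{-1}$, and the vector itself is invariant.

```python
import numpy as np

E_old = np.eye(2)                        # old basis vectors as columns
F = np.array([[2.0, 1.0],
              [0.0, 1.0]])               # forward transform (invertible)
E_new = E_old @ F                        # e~_j = F^i_j e_i
B = np.linalg.inv(F)                     # backward transform

v_old = np.array([3.0, 4.0])             # coordinates w.r.t. old basis
v_new = B @ v_old                        # contravariant transformation

# invariance: the same geometric vector from either expansion
assert np.allclose(E_old @ v_old, E_new @ v_new)
print(v_new)
```

The coordinates shrink exactly where the basis vectors stretch, which is the contravariance the proposition formalizes.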

Later, a vector is called a contravariant tensor. For the sake of notation, and to distinguish between the transformations of the basis and the coordinates of a vector, the index of a coordinate is written as a superscript to show it is contravariant. Therefore,

## Linear maps and linear functionals

Definition: $\mathcal{L}(V,W)$ is defined as the space of all linear maps $T:V\to W$, where the domain and codomain are vector spaces.

It can be proved that $\mathcal{L}(V,W)$ is a vector space; hence, for $T,S\in\mathcal{L}(V,W)$ and $v\in V$,

$$(T+S)(v)=T(v)+S(v).$$

Note that the addition on the LHS is an operation in $\mathcal{L}(V,W)$ and the addition on the RHS is an operation in $W$.

Proposition 1: Let $T\in\mathcal{L}(V,W)$, i.e. a linear map from a vector space $V$ to another one $W$. If $\{e_i\}$ is a basis for $V$, and $T(e_i)=w_i$ for given $w_i\in W$, then $T$ is uniquely defined over $V$.

This proposition says a linear map over a space is uniquely determined by its action on the basis vectors of that space. In other words, if $T(e_i)=S(e_i)$ for all $i$, then $T=S$. Proof: let $T(e_i)=w_i$ (given by the nature of $T$); then for $v\in V$ such that $v=v^ie_i$, we can write $T(v)=T(v^ie_i)=v^iT(e_i)=v^iw_i$. Because the $v^i$ are unique for (a particular) $v$, $T(v)$ is unique for that $v$ and hence must be unique for any $v\in V$. In other words, there is only one $T$ over $V$ such that $T(e_i)=w_i$.

As a side remark, if $\{e_i\}$ is a basis for $V$, hence spanning $V$, then $\{T(e_i)\}$ spans the range of $T$; the range of $T$ is a subspace of $W$.

By this proposition, a matrix completely determining a linear map can be obtained. Let $V$ be n-dimensional with a basis $\{e_i\}$, and $W$ be m-dimensional with a basis $\{\tilde e_j\}$. Then there are $nm$ coefficients $T^j{}_i$ such that,

$$T(e_i)=T^j{}_i\,\tilde e_j.$$

In the notation $T^j{}_i$, the index $j$ is a superscript because, for a fixed $i$ and hence a fixed $T(e_i)\in W$, the term $T^j{}_i$ is the $j$-th coordinate of $T(e_i)$ and it is contravariant.

For $v\in V$ and $w=T(v)\in W$, with the coordinates $v^i$ and $w^j$, we can show that,

$$w^j=T^j{}_i\,v^i.$$

This expression can be written as a matrix multiplication $[w]_{\tilde e}=\mathbf{T}\,[v]_e$, where $\mathbf{T}$ is presented by its elements as $\mathbf{T}=[T^j{}_i]$, with $j$ indexing rows and $i$ indexing columns.

As a remark, the above can be viewed through the columns of the matrix and written as,

### Linear functional (linear form or covector)

Definition: a linear functional on is a linear map . The space is called the dual space of .

Proposition: Let $\{e_i\}$ be a basis of $V$ and $\varepsilon^i\in V^*$ be defined as $\varepsilon^i(e_j)=\delta^i_j$. Then $\{\varepsilon^i\}$, called the dual basis of $\{e_i\}$, is a basis of $V^*$, and hence $\dim V^*=\dim V=n$.

Proof: first we show that the $\varepsilon^i$’s are linearly independent, i.e. $c_i\varepsilon^i=0$ implies $c_i=0$ for all $i$. Note that on the RHS, $0$ is the zero functional. For a $v=v^je_j\in V$ we can write $(c_i\varepsilon^i)(v)=c_iv^i$ and assume $c_i\varepsilon^i=0$. Then,

$$c_iv^i=0.$$

Since $v$ is arbitrary, $c_i=0$ for all $i$. ■

Now we prove that $\{\varepsilon^i\}$ spans $V^*$, i.e. for any $\alpha\in V^*$ there are scalars $\alpha_i$ such that $\alpha=\alpha_i\varepsilon^i$. To this end, we apply both sides to a basis vector $e_j$ of $V$ and write $\alpha(e_j)=\alpha_i\varepsilon^i(e_j)=\alpha_i\delta^i_j$, which implies that $\alpha_j$ is explicitly found as $\alpha_j=\alpha(e_j)$. Consequently, $\alpha=\alpha(e_i)\,\varepsilon^i$. ■

Consider $\alpha\in V^*$ and $v=v^ie_i\in V$. If $\alpha=\alpha_i\varepsilon^i$, then the matrix of the linear functional/map is

$$[\alpha]=(\alpha_1,\dots,\alpha_n),$$

a single-row matrix. So, for $v$ as $v=v^ie_i$ we can write,

$$\alpha(v)=\alpha_iv^i=[\alpha]\,[v]_e.$$

Result: if the coordinates of a vector are shown by a column vector or single-column matrix, then a row vector or single-row matrix represents the matrix of a linear functional.

Definition: a linear functional , which can be identified with a row vector as its matrix, is also called a covector.

Like vectors, a covector (and any linear map) is a mathematical object that is independent of a basis (i.e. invariant). The geometric representation of a vector in $\mathbb{R}^n$ (or, by an isomorphism, in $\mathbb{R}^n$) is an arrow. For a covector isomorphic to a vector in $\mathbb{R}^2$, the representation is a stack of parallel iso lines in $\mathbb{R}^2$. A covector that is isomorphic to a vector in $\mathbb{R}^3$ can be represented by a stack of iso surfaces (parallel planes) in $\mathbb{R}^3$.

Example: Let $\{e_1,e_2\}$ be a basis of $V\cong\mathbb{R}^2$ and $[\alpha]=(\alpha_1,\alpha_2)$ be the matrix of a covector $\alpha\in V^*$. Then, if $\alpha(v)=c$, we can write,

$$\alpha_1v^1+\alpha_2v^2=c,$$

which, for different values of $c$, is a set of (iso) lines in a Cartesian CS defined by two axes along $e_1$ and $e_2$; these lines are the geometric representation of the covector. The Cartesian axes are not necessarily orthogonal.

If we choose any other basis for $V$, then the matrix of the covector changes, and the geometric representations of the axes are different; however, the covector itself, as a geometric object (the stack of iso lines), stays the same.

Example: Let $\{e_1,e_2\}$ be a basis of $V=\mathbb{R}^2$ and $\{\varepsilon^1,\varepsilon^2\}$ be the dual basis for $V^*$, so that $\varepsilon^i(e_j)=\delta^i_j$. Then, the matrix of each dual basis vector is obtained as a row of the inverse of the matrix whose columns are the basis vectors $e_1$, $e_2$.
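A sketch of this computation with an arbitrary non-orthogonal basis: stacking the $e_i$ as columns of a matrix $E$, the rows of $E^{-1}$ satisfy (row $i$)$\cdot$(column $j$)$=\delta^i_j$, so they are the matrices of the dual basis covectors.

```python
import numpy as np

E = np.array([[1.0, 1.0],
              [0.0, 2.0]])       # columns: e_1, e_2 (a non-orthogonal basis)
Dual = np.linalg.inv(E)          # rows: matrices of eps^1, eps^2

# eps^i(e_j) = delta^i_j
assert np.allclose(Dual @ E, np.eye(2))

alpha = Dual[0]                  # matrix of eps^1 (a row vector)
v = E @ np.array([3.0, 4.0])     # v = 3 e_1 + 4 e_2
assert np.isclose(alpha @ v, 3.0)  # eps^1 extracts the first coordinate
print(alpha)
```

Applying a covector to a vector is a row-times-column product, matching the "Result" above.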

### Change of basis for covectors

Let $\{e_i\}$ and $\{\tilde e_i\}$ be two bases for $V$, and hence, $\{\varepsilon^i\}$ and $\{\tilde\varepsilon^i\}$ be the two corresponding dual bases for $V^*$. Each new dual basis vector can be written in terms of the (old) dual basis vectors by using a linear transformation as $\tilde\varepsilon^i=Q^i{}_j\varepsilon^j$. Now, the coefficients $Q^i{}_j$ are to be determined as follows,

$$\delta^i_j=\tilde\varepsilon^i(\tilde e_j)=Q^i{}_k\,\varepsilon^k(\tilde e_j).$$

Using the formula regarding the change of basis of vectors, the above continues as,

$$Q^i{}_k\,\varepsilon^k(F^l{}_j\,e_l)=Q^i{}_kF^l{}_j\,\delta^k_l=Q^i{}_kF^k{}_j=\delta^i_j\ \Rightarrow\ Q=F^{-1}=B.$$

This indicates that the dual basis vectors are transformed by the backward transformation. Referring to the index convention, we use a superscript for components that are transformed through a backward transformation. Therefore,

$$\tilde\varepsilon^i=B^i{}_j\,\varepsilon^j,$$

meaning that dual basis vectors are contravariant because they behave contrary to the basis vectors in the transformation from $\{e_i\}$ to $\{\tilde e_i\}$.

Now let $\alpha\in V^*$. Writing $\alpha=\alpha_i\varepsilon^i=\tilde\alpha_i\tilde\varepsilon^i$ and using the above relation, we get,

$$\tilde\alpha_i=F^j{}_i\,\alpha_j,$$

meaning that the components of a covector transform in a covariant manner when the basis of the vector space changes from $\{e_i\}$ to $\{\tilde e_i\}$.

Briefly the following relations have been shown.

### Basis and change of basis for the space of linear maps

As can be proved, $\mathcal{L}(V,W)$ is a linear vector space and any linear map is a vector in it. Therefore, we should be able to find a basis for this space. If $V$ is n-dimensional and $W$ is m-dimensional, then $\mathcal{L}(V,W)$ is mn-dimensional and hence its basis should have $mn$ vectors, i.e. $mn$ linear maps. Let’s enumerate the basis vectors of $\mathcal{L}(V,W)$ for $i=1,\dots,n$ and $j=1,\dots,m$; then any linear map can be written as,

By Proposition 1, any linear map is uniquely determined by its action on the basis vectors of its domain. If $\{e_i\}$ is a basis for $V$, then for any basis vector $e_i$,

Setting a basis for as , the above equation becomes,

This equation holds if,

Therefore, we can choose a set of basis vectors for as,

By recruiting the basis of , the above can be written as,

Each such term is obviously a linear map from $V$ to $W$. It can be readily shown that the set of derived basis maps is linearly independent, i.e. a linear combination of them vanishes only when all its coefficients vanish.

So, a linear map can be written as a linear combination of these basis maps. Here, it is necessary to use the index level convention. To this end, we observe that for a fixed $j$, the coefficient couples with a basis covector and represents the coordinates of a covector. As coordinates of a covector are covariant, that index is written as a subscript. For a fixed $i$ though, the coefficient couples with a basis vector and represents the coordinates of a vector. As coordinates of a vector are contravariant, that index should be raised. Therefore, we write,

The coefficients can be determined as,

stating that these coefficients are the coordinates of the linear map with respect to the basis of $\mathcal{L}(V,W)$. Comparing with what was derived earlier for the matrix of a linear map, we can conclude that they coincide. Therefore,

The above result can also be derived as follows.

The change of basis of $\mathcal{L}(V,W)$ is as follows.

For , let and be bases for , and and be bases for . Also, and are corresponding bases of . Forward and backward transformation pairs in and are denoted as and .

Note that the coordinates of a linear map need two transformations, such that the covariant index pertains to the forward transformation and the contravariant index pertains to the backward transformation.

Example: let , then,

If the matrices, , , and are considered, we can write,

## Bilinear forms

A bilinear form is a bilinear map defined as $B:V\times V\to\mathbb{R}$. Setting a basis for $V$, a bilinear form can be represented by matrix multiplications on the coordinates of the input vectors. If $\{e_i\}$ is a basis for $V$, then

$$B(v,w)=B(v^ie_i,\,w^je_j)=v^iw^j\,B(e_i,e_j)=v^iw^j\,B_{ij},$$

which can be written as,

$$B(v,w)=[v]_e^T\,\mathbf{B}\,[w]_e,$$

where $\mathbf{B}$ is the $n\times n$ matrix with $\mathbf{B}_{ij}=B_{ij}=B(e_i,e_j)$.
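A sketch of this matrix representation with an arbitrary (not necessarily symmetric) coefficient matrix, checking bilinearity in the first argument:

```python
import numpy as np

Bmat = np.array([[1.0, 2.0],
                 [0.0, 3.0]])    # B_ij = B(e_i, e_j); need not be symmetric

def bilinear(v, w):
    """B(v, w) = [v]^T B [w] on coordinate vectors."""
    return v @ Bmat @ w

v = np.array([1.0, 2.0])
w = np.array([3.0, 4.0])
u = np.array([5.0, -1.0])

# linearity in the first argument (the second is analogous)
assert np.isclose(bilinear(2 * v + u, w), 2 * bilinear(v, w) + bilinear(u, w))
print(bilinear(v, w))
```

The $n^2$ numbers $B_{ij}$ determine $B$ on all of $V\times V$, mirroring Proposition 1.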

The expression $B(v,w)=v^iw^jB(e_i,e_j)$ indicates that a bilinear form is uniquely defined by its action on the basis vectors. This is the same as what was shown for linear maps by Proposition 1. It comes from the fact that a bilinear form is a linear map with respect to one of its arguments at a time.

Now we seek a basis for the space of bilinear forms, i.e. . This is a vector space with the following defined operations.

The dimension of this space is $n^2$; therefore, for any bilinear form there are $n^2$ basis bilinear forms such that,

From the result we can conclude that

Following the index level convention, the indices of $B_{ij}$ should stay as subscripts because each index pertains to the covariant coordinates of a covector after fixing the other index.

If $\{e_i\}$ and $\{\tilde e_i\}$ are two bases for $V$, then the change of basis for the space of bilinear forms is as follows.

Example: the metric bilinear map (metric tensor)

The dot/inner product on the vector space $V$ over $\mathbb{R}$ is defined as a bilinear map $\cdot:V\times V\to\mathbb{R}$ that is symmetric, $v\cdot w=w\cdot v$, and positive definite, $v\cdot v\ge0$ with equality iff $v=0$. With this regard, two objects (that can have geometric interpretations for Euclidean spaces) are defined as,

1- Length of a vector: $\|v\|=\sqrt{v\cdot v}$.
2- Angle between two vectors: $\cos\theta=\dfrac{v\cdot w}{\|v\|\,\|w\|}$.

Let’s see how the dot product is expressed through the coordinates of vectors. With $\{e_i\}$ being a basis for $V$, we can write,

$$v\cdot w=(v^ie_i)\cdot(w^je_j)=v^iw^j\,(e_i\cdot e_j)=v^iw^j\,g_{ij}.$$

The term $g_{ij}=e_i\cdot e_j$ is called the metric tensor and its components can be presented by an n-by-n matrix as $G=[g_{ij}]$.

If the basis is an orthonormal basis, i.e. $e_i\cdot e_j=\delta_{ij}$, then $g_{ij}=\delta_{ij}$ and $G$ is the identity matrix. Therefore, $v\cdot w=v^iw^i$ and $\|v\|=\sqrt{\textstyle\sum_i(v^i)^2}$.
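A sketch with an arbitrary non-orthonormal basis: computing $g_{ij}=e_i\cdot e_j$ and recovering the Euclidean length of a vector from its coordinates through $G$.

```python
import numpy as np

E = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # columns: basis vectors e_1, e_2
G = E.T @ E                           # metric tensor: G_ij = e_i . e_j

v_coords = np.array([1.0, 2.0])       # v = 1 e_1 + 2 e_2
v = E @ v_coords                      # v in standard (orthonormal) coordinates

length_via_metric = np.sqrt(v_coords @ G @ v_coords)  # sqrt(v^i v^j g_ij)
assert np.isclose(length_via_metric, np.linalg.norm(v))
print(G)
```

With an orthonormal basis $E$ would be orthogonal, $G$ would reduce to the identity, and the formula collapses to the usual sum of squared coordinates.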

## Multilinear forms

A general multilinear form is a multilinear map defined as $T:V_1\times\cdots\times V_k\to\mathbb{R}$, where each $V_i$ is a vector space. Particularly, setting $V_1=\cdots=V_k=V$ leads to a simpler multilinear form $T:V\times\cdots\times V\to\mathbb{R}$.

Following the same steps as shown for a bilinear map, a multilinear form can be written as,

which implies,

showing that a multilinear form can be written as a linear combination of covectors.

## Multilinear maps

A multilinear map is a map that is linear in each of its several vector arguments and returns a vector. A multilinear map can be written in terms of vector and covector bases. For example, consider $T:V\times V\to V$, with $\{e_i\}$ and $\{\varepsilon^i\}$ being bases for $V$ and $V^*$. We can write,

Because for each $u$ and $v$ we have $u=\varepsilon^i(u)\,e_i$ and $v=\varepsilon^j(v)\,e_j$, we can write,

We write two of the indices as subscripts in accordance with their position on the LHS; however, we’ll see that the coefficient is a coordinate of a covector with respect to each of these indices when the others are fixed. Combining the above, we can write,

This term collects the coefficients and uniquely defines the multilinear map. We can imagine it as a 3-dimensional array/matrix. The above also shows that the multilinear map can be written as a linear combination of tensor products of basis covectors and basis vectors.

## Definition of a tensor

Defining the following terms,

• Vector space and basis and another basis .
• Basis transformation as , and therefore .
• The dual vector space of as .
• Vector space and basis and another basis .
• Basis transformation as , and therefore
• Linear map .
• Bilinear form .

we concluded that,

It is observed that if a vector is written in terms of a single sum/linear combination of basis vectors of $V$, then the components of the vector change contravariantly with respect to a change of basis. Then, covectors are considered, and it is observed that their components change covariantly upon a change of basis of $V$. A linear map can be written as a linear combination of (tensor products of) vectors and covectors; the coefficients of this combination are seen to change both contra- and covariantly when the bases (of $V$ and $W$) change. A bilinear form, though, can be written in terms of a linear combination of covectors, and the corresponding coefficients change covariantly with a change of basis. These results can be generalized toward an abstract definition of a mathematical object called a tensor. There are the two following approaches for algebraically defining a tensor.

### Tensor as a multilinear form

Motivated by how linear maps, bilinear forms, and multilinear forms and maps can be written by combining basis vectors and covectors, a generalized combination of these vectors can be considered. For example,

This object consists of a linear combination of a unified (merged) set of basis vectors and covectors (of $V$ and $V^*$) by scalar coefficients. According to the type of the basis vectors, the indices become sub- or superscripts, and hence determine the type of the transformation regarding each index. By reordering the basis vectors and covectors, we can write,

Recalling that vector components can be written as $v^i=\varepsilon^i(v)$ implies that there is a map for a particular vector such that,

And also the components of a covector are determined as $\alpha_i=\alpha(e_i)$, which

motivates defining the array (collection of the coefficients) as that of a multilinear form,

Therefore, the object, i.e. the multi-dimensional array which depends on the chosen bases of $V$ and $V^*$ and whose transformation rules are based on the types of the bases (or indices), can be intrinsically related to an underlying multilinear map. By virtue of this observation, an object called a tensor is defined as the following.

Definition: A tensor of type (r,s) on a vector space $V$ over a field $\mathbb{R}$ (or $\mathbb{C}$) is a multilinear map as,

$$T:\underbrace{V\times\cdots\times V}_{r\text{ copies}}\times\underbrace{V^*\times\cdots\times V^*}_{s\text{ copies}}\to\mathbb{R}.$$

The coordinates or the (scalar) components of a tensor can then be determined once a basis $\{e_i\}$ for $V$ and a basis $\{\varepsilon^i\}$ for $V^*$ are fixed. Therefore,

$$T^{j_1\cdots j_s}{}_{i_1\cdots i_r}=T(e_{i_1},\dots,e_{i_r},\varepsilon^{j_1},\dots,\varepsilon^{j_s}).$$

Note that r is the number of covariant indices and s is the number of contravariant indices. A tensor of type (r,s) can be imagined as an (r+s)-dimensional array of data containing $n^{r+s}$ elements. Each index corresponds to a dimension of the data array.

By this definition, a vector is a (0,1) tensor as it can be viewed as,

$$v:V^*\to\mathbb{R},\qquad v(\varepsilon^i)=\varepsilon^i(v)=v^i.$$

This implies that for each $v\in V$ there is a (multilinear with one input) map receiving a basis covector and returning the corresponding scalar component of the vector ($v^i$). This corresponding map is unique for each vector and it is called a tensor.

A covector (dual vector or linear form) is a (1,0) tensor because a covector is a linear map $\alpha:V\to\mathbb{R}$.

A linear map $T:V\to V$ (or $T\in\mathcal{L}(V,V)$) is a (1,1) tensor because the term $T^j{}_i$ pertains to a covector’s components for a fixed $j$ and to a vector’s components for a fixed $i$. Therefore,

$$T^j{}_i=\hat T(e_i,\varepsilon^j);$$

in other words, a linear map is a tensor viewed as a multilinear map $\hat T:V\times V^*\to\mathbb{R}$. Here, the multilinear form can be considered as $\hat T(v,\alpha)=\alpha(T(v))$, which gives $\hat T(e_i,\varepsilon^j)=\varepsilon^j(T(e_i))=T^j{}_i$. Note that $T(e_i)$ returns a vector and $\varepsilon^j$ extracts its j-th coordinate; hence, the array/matrix of the linear map is retrieved.

A bilinear form is then a (2,0) tensor, where

$$B_{ij}=B(e_i,e_j).$$

A bilinear map $B:V\times V\to V$ is a (2,1) tensor, where the associated multilinear form can be considered as $\hat B(u,v,\alpha)=\alpha(B(u,v))$.

As an example, the cross product of two vectors, defined as $\times:\mathbb{R}^3\times\mathbb{R}^3\to\mathbb{R}^3$, is a multilinear (bilinear) map and hence a (2,1) tensor.
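A sketch of the cross product as a (2,1) tensor: its components $(u\times v)^k=\epsilon_{ij}{}^k\,u^iv^j$ come from the Levi-Civita symbol, stored here as a 3-dimensional array and contracted with `einsum`.

```python
import numpy as np

# Levi-Civita symbol as a 3x3x3 array: +1 on even permutations of (0,1,2),
# -1 on odd permutations, 0 otherwise.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutation
    eps[j, i, k] = -1.0  # odd permutation (swap first two indices)

def cross(u, v):
    """(u x v)^k = eps_{ij}^k u^i v^j, a (2,1) tensor contraction."""
    return np.einsum('ijk,i,j->k', eps, u, v)

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
print(cross(u, v))  # e_1 x e_2 = e_3, i.e. [0. 0. 1.]
```

The two lower indices of the array eat the two input vectors; the remaining upper index produces the output vector, exactly the (2,1) pattern.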

By convention, scalars are (0,0) tensors.

Remark: for a tensor we can write,

Example: Stress tensor. The Cauchy stress tensor in mechanics is a linear map and hence a (1,1) tensor.

#### Rank of a tensor

The rank of an (r,s)-type tensor is defined as r+s. In this regard, tensors of different types can have the same rank. For example, tensors of types (1,1), (2,0), and (0,2) all have rank 2. Here we compare these tensors with each other.

A (1,1) tensor representing a linear map is $T=T^j{}_i\,\varepsilon^i\otimes e_j$ with $T^j{}_i=\varepsilon^j(T(e_i))$.

A (2,0) tensor representing a bilinear form is $B=B_{ij}\,\varepsilon^i\otimes\varepsilon^j$ with $B_{ij}=B(e_i,e_j)$.

A (0,2) tensor is $S=S^{ij}\,e_i\otimes e_j$ with $S^{ij}=S(\varepsilon^i,\varepsilon^j)$.

The coefficients of each of the above tensors are collected in a 2-dimensional array/matrix; however, they follow different transformation rules based on their types.