## Gtu Maths - 2(3110015) Papers, Syllabus, Study Material And Example For Examination

**Gtu Maths - 2(3110015) :**

### Syllabus

#### As Per Old Syllabus (2110015) :

#### Chapter 1 : Systems of Linear Equations and Matrices

**Weightage Of Chapter :** 15%

- Systems of Linear Equations
- Matrices and Elementary Row Operations
- The Inverse of a Square Matrix
- Matrix Equations
- Applications of Systems of Linear Equations

#### Chapter 2 : Linear Combinations and Linear Independence

**Weightage Of Chapter :** 21%

- Vectors in Rⁿ
- Linear Combinations
- Linear Independence
- Vector Spaces
- Definition of a Vector Space
- Subspaces
- Basis and Dimension
- Coordinates and Change of Basis

#### Chapter 3 : Linear Transformations

**Weightage Of Chapter :** 21%

- Linear Transformations
- The Null Space and Range
- Isomorphisms
- Matrix Representation of Linear Transformations
- Similarity
- Eigenvalues and Eigenvectors
- Diagonalization

#### Chapter 4 : Inner Product Spaces

**Weightage Of Chapter :** 19%

- The Dot Product on Rⁿ and Inner Product Spaces
- Orthonormal Bases
- Orthogonal Complements
- Application: Least Squares Approximation
- Diagonalization of Symmetric Matrices
- Application: Quadratic Forms

#### Chapter 5 : Vector Functions

**Weightage Of Chapter :** 15%

- Vector & Scalar Functions and Fields, Derivatives
- Curve, Arc length, Curvature & Torsion
- Gradient of Scalar Field, Directional Derivative
- Divergence of a Vector Field
- Curl of a Vector Field

#### Chapter 6 : Vector Calculus

**Weightage Of Chapter :** 19%

- Line Integrals
- Path Independence of Line Integrals
- Green's Theorem in the plane
- Surface Integrals
- Divergence Theorem of Gauss
- Stokes' Theorem

#### As Per New Syllabus (3110015)

#### 1. Vector Calculus

**Weightage Of Chapter :** 33%

- Parametrization of curves. Arc length of curve in space
- Line Integrals, Vector fields and applications as Work, Circulation and Flux
- Path independence
- potential function, piecewise smooth
- connected domain, simply connected domain, fundamental theorem of line integrals
- Conservative fields, component test for conservative fields, exact differential forms, Div, Curl, Green’s theorem in the plane (without proof)
- Parametrization of surfaces
- surface integrals
- Stokes' theorem (without proof), Divergence Theorem (without proof)

#### 2. Laplace Transform

**Weightage Of Chapter :** 20%

- Laplace Transform and inverse Laplace transform, Linearity
- First Shifting Theorem (s-Shifting)
- Transforms of Derivatives and Integrals. ODEs
- Unit Step Function (Heaviside Function)
- Second Shifting Theorem (t-Shifting)
- Laplace transform of periodic functions
- Short Impulses, Dirac's Delta Function, Convolution
- Integral Equations, Differentiation and Integration of Transforms
- ODEs with Variable Coefficients, Systems of ODEs

#### 3. Fourier Integral

**Weightage Of Chapter :** 2%

- Fourier Integral, Fourier Cosine Integral and Fourier Sine Integral.

#### 4. First order ordinary differential equations

**Weightage Of Chapter :** 12%

- First order ordinary differential equations
- Exact, linear and Bernoulli's equations; equations not of first degree: equations solvable for p, equations solvable for y, equations solvable for x, and Clairaut's type

#### 5. Ordinary differential equations

**Weightage Of Chapter :** 20%

- Ordinary differential equations of higher orders, Homogeneous Linear ODEs of Higher Order
- Homogeneous Linear ODEs with Constant Coefficients
- Euler-Cauchy Equations, Existence and Uniqueness of Solutions
- Linear Dependence and Independence of Solutions
- Wronskian, Nonhomogeneous ODEs, Method of Undetermined Coefficients
- Solution by Variation of Parameters

#### 6. Series Solutions of ODEs

**Weightage Of Chapter :** 13%

- Series Solutions of ODEs
- Special Functions
- Power Series Method, Legendre’s Equation, Legendre Polynomials
- Frobenius Method, Bessel’s Equation
- Bessel functions of the first kind and their properties

### Reference And Text Books Of Maths 2

#### Advanced Engineering Mathematics

Author : Erwin Kreyszig

Publisher : John Wiley and Sons

#### Advanced Engineering Mathematics

Author : Peter O'Neill

Publisher : Cengage

#### Advanced Engineering Mathematics

Author : Dennis G. Zill

Publisher : Jones and Bartlett Publishers

#### Thomas' Calculus

Author : Maurice D. Weir, Joel Hass

Publisher : Pearson

### Question Papers Of Old Maths - 2 (2110015)

May 2019

May 2018

Dec 2017

May 2017

Jun 2017

### Question Papers Of New Maths - 2 (3110015)


### Most Imp Topics Of Maths 2

### Linear Dependence And Independence

**Linear Dependence And Independence :**

Let S = {v1, v2, ..., vr} be a non-empty set of vectors and consider the equation

k1v1 + k2v2 + ... + krvr = 0

If the homogeneous system obtained from this equation has only the trivial solution

k1 = 0, k2 = 0, ..., kr = 0

then S is called a linearly independent set.

If the system has a non-trivial solution (i.e., at least one k is non-zero), then S is called a linearly dependent set.

S is always linearly dependent if it contains the zero vector, since 0 = 0v1 + 0v2 + ... + 0vr.

#### Example :

**Which of the following sets of vectors are linearly dependent ?**

(1) (4,-1,2), (-4,10,2), (4,0,1)

(2) (-2,0,1), (3,2,5), (6,-1,1), (7,0,-2)

#### Solution :

Let v1 = (4,-1,2), v2 = (-4,10,2), v3 = (4,0,1)

Consider k1v1+k2v2+k3v3 = 0

k1(4,-1,2) + k2(-4,10,2) + k3(4,0,1) = (0,0,0)

(4k1 - 4k2 + 4k3 , -k1 + 10k2 , 2k1 + 2k2 + k3) = (0,0,0)

Equating corresponding components,

4k1 -4k2+4k3 = 0

-k1 +10k2 = 0

2k1+2k2+ k3 = 0

The augmented matrix of the system is [A | 0], where A is the coefficient matrix of the three equations above.

Since det(A) = -52 ≠ 0, reducing the augmented matrix to reduced row echelon form gives the identity matrix.

Hence,

k1 = 0, k2 = 0, k3 = 0

The system has only the trivial solution.

Hence,

v1,v2,v3 are linearly independent.

(2) Let v1 = (-2,0,1), v2 = (3,2,5), v3 = (6,-1,1), v4 = (7,0,-2)

Consider k1v1+k2v2+k3v3 +k4v4= 0

k1(-2,0,1) + k2(3,2,5) + k3(6,-1,1) + k4(7,0,-2) = (0,0,0)

(-2k1 + 3k2 + 6k3 + 7k4 , 2k2 – k3 ,k1+5k2+ k3 – 2k4 ) = (0,0,0)

Equating Corresponding Components,

-2k1 + 3k2 + 6k3 + 7k4 = 0

2k2 – k3 = 0

k1+5k2+ k3 – 2k4 = 0

The number of unknowns, r = 4

The number of equations, n = 3

r > n

Since the homogeneous system has more unknowns than equations, it has non-trivial solutions.

Hence, v1, v2, v3, v4 are linearly dependent.
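Both results can be verified numerically. A minimal sketch using NumPy: a set of vectors is linearly independent exactly when the matrix with those vectors as columns has rank equal to the number of vectors.

```python
import numpy as np

def is_independent(vectors):
    # Stack the vectors as columns; they are linearly independent
    # exactly when the matrix has full column rank.
    A = np.array(vectors, dtype=float).T
    return np.linalg.matrix_rank(A) == len(vectors)

set1 = [(4, -1, 2), (-4, 10, 2), (4, 0, 1)]
set2 = [(-2, 0, 1), (3, 2, 5), (6, -1, 1), (7, 0, -2)]

print(is_independent(set1))  # True  -> linearly independent
print(is_independent(set2))  # False -> linearly dependent (4 vectors in R^3)
```

This mirrors the hand calculation: the first set gives only the trivial solution, while the second has more vectors than dimensions and so must be dependent.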

### Types of matrix :

#### (1) Row Matrix :

A row matrix or row vector is a matrix having only one row and any number of columns.

[ 2 5 -3 4 ]

#### (2) Column Matrix :

A column matrix or column vector is a matrix having only one column and any number of rows.

#### (3) Zero Or Null Matrix :

A zero or null matrix is a matrix in which all the elements are zero, and it is denoted by 0.

#### (4) Square Matrix :

A square matrix is a matrix in which the number of rows is equal to the number of columns.

#### (5) Diagonal Matrix :

A square matrix, all of whose non-diagonal elements are zero and at least one diagonal element is non-zero, is called a diagonal matrix.

#### (6) Unit Or Identity Matrix :

Unit or identity matrix is a diagonal matrix, all of whose diagonal elements are unity and is denoted by I.

#### (7) Scalar Matrix :

Scalar matrix is a diagonal matrix, all of whose diagonal elements are equal.

#### (8) Upper Triangular Matrix :

An upper triangular matrix is a square matrix in which all the elements below the diagonal are zero.

#### (9) Lower Triangular Matrix :

A lower triangular matrix is a square matrix in which all the elements above the diagonal are zero.

#### (10) Trace of a Matrix :

The trace of a square matrix is the sum of its diagonal elements.

For example, if A is a square matrix with diagonal elements 2, 6 and 3, then

Trace of A = 2 + 6 + 3 = 11

#### (11) Transpose of a Matrix :

The transpose of a matrix A is the matrix obtained by interchanging its rows and columns, and is denoted by Aᵀ.

#### (12) Determinant of a Matrix :

If A is a square matrix, then determinant of A is represented as |A| or det(A).

#### (13) Singular and Non-singular Matrices :

A square matrix A is called singular if det(A) = 0 and non-singular if det(A) ≠ 0.
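These definitions map directly onto NumPy functions. A small sketch — the matrix `A` below is a made-up example whose diagonal is 2, 6, 3, matching the trace example above:

```python
import numpy as np

A = np.array([[2, 1, 0],
              [4, 6, 5],
              [7, 8, 3]])  # example square matrix with diagonal 2, 6, 3

print(np.trace(A))           # 11 : sum of the diagonal elements
print(A.T)                   # transpose: rows and columns interchanged
d = np.linalg.det(A)
print(abs(d) > 1e-12)        # True -> non-singular, since det(A) != 0
```

Because floating-point determinants are rarely exactly zero, comparing against a small tolerance is more robust than `d != 0`.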

### Elementary Row Operations :

#### Echelon Form of a Matrix :

A matrix A is said to be in row echelon form if it satisfies the following properties :

(1) Every zero row of the matrix A occurs below a non-zero row.

(2) The first non-zero number from the left of a non-zero row is 1. This is called a leading 1.

(3) For each non-zero row, the leading 1 appears to the right and below any leading 1 in the preceding rows.

#### Types Of Echelon Form :

(1) Row Echelon Form

(2) Reduced Row Echelon Form
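The reduced row echelon form can be computed with SymPy's `rref`. A minimal sketch, reusing the coefficient matrix from the linear-independence example earlier in this article:

```python
from sympy import Matrix

A = Matrix([[4, -4, 4],
            [-1, 10, 0],
            [2, 2, 1]])

R, pivot_cols = A.rref()  # reduced row echelon form and pivot column indices
print(R)                  # the 3x3 identity matrix, since A is invertible
print(pivot_cols)         # (0, 1, 2)
```

Since every column is a pivot column, the homogeneous system Ax = 0 has only the trivial solution, which agrees with the hand computation above.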


### First Shifting Theorem :

The first shifting theorem is used to find the Laplace transform of a function multiplied by e^(at) or e^(-at).

If the function is multiplied by e^(at), replace s with (s - a) in its transform; if it is multiplied by e^(-at), replace s with (s + a).

If L{f(t)} = F(s), then

L{e^(at) f(t)} = F(s - a)

L{e^(-at) f(t)} = F(s + a)

These two equations show when F(s + a) is used and when F(s - a) is used; the standard table of Laplace transforms can be shifted in the same way whenever the function carries a factor e^(at) or e^(-at).

#### Example :

#### Answer :

First, set aside the factor e^(-3t) and find the Laplace transform of the remaining function.

Then replace s with (s + 3), because the factor is e^(-3t) and, by the theorem, a factor e^(-at) shifts s to (s + a).

#### Example :

#### Solution :

As in the example above, first find the Laplace transform of the function without the factor e^(-3t).

Then replace s with (s + 3) according to the theorem.

Finally, simplify the result to obtain the answer.

In this article we learned how to find the Laplace transform of a function multiplied by e^(at) or e^(-at).
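The worked examples above were images, so as an illustrative stand-in, here is a SymPy check of the theorem with an assumed example f(t) = cos(2t): the transform of e^(-3t) cos(2t) must equal F(s) with s replaced by s + 3.

```python
from sympy import symbols, exp, cos, laplace_transform, simplify

t = symbols('t', positive=True)
s = symbols('s')

# F(s) = L{cos(2t)} = s / (s**2 + 4)
F = laplace_transform(cos(2*t), t, s, noconds=True)

# G(s) = L{e^(-3t) cos(2t)} -- the shifted transform
G = laplace_transform(exp(-3*t) * cos(2*t), t, s, noconds=True)

# First shifting theorem: G(s) should equal F(s + 3)
print(simplify(G - F.subs(s, s + 3)) == 0)  # True
```

Any other transform pair from the table behaves the same way: multiplying by e^(-at) in the time domain shifts s to (s + a) in the transform.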

### Linear Transformations

Let V and W be two vector spaces. A linear transformation T : V -> W is a function T from V to W such that

**(a) T(u+v) = T(u) + T(v)**

**(b) T(ku) = kT(u)**

for all vectors u and v in V and all scalars k.

If V = W, the linear transformation T : V -> V is called a linear operator.

#### Example :

Show that the following functions are linear transformations.

(1) T : R2 -> R2 where T(x,y) = (x+2y , 3x-y)

(2) T : R3 -> R2 where T(x,y,z) = (2x-y+z , y-4z)

#### Solution :

Let u = (x1,y1) and v = (x2,y2) be the vectors in R2 and k be any scalar.

T(u) = (x1+2y1 , 3x1 – y1)

T(v) = (x2+2y2 , 3x2-y2)

(1) u + v = (x1,y1) + (x2,y2)

= (x1+ x2 , y1 + y2 )

T( u + v ) = (x1 + x2 + 2y1 + 2y2 , 3x1 + 3x2 - y1 - y2)

= (x1 + 2y1 + x2 + 2y2 , 3x1 - y1 + 3x2 - y2)

= (x1 + 2y1 , 3x1 - y1) + (x2 + 2y2 , 3x2 - y2)

= T(u) + T(v)

(2) ku = k(x1,y1) = (kx1,ky1)

T(ku) = (kx1 + 2ky1 , 3kx1 - ky1)

= k (x1 + 2y1 , 3x1 - y1)

= kT(u)

Hence, T is a linear transformation.
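Because T(x,y) = (x + 2y, 3x - y) is multiplication by a fixed matrix, its linearity can also be spot-checked numerically. A minimal sketch with made-up test vectors:

```python
import numpy as np

A = np.array([[1, 2],
              [3, -1]])  # matrix of T(x, y) = (x + 2y, 3x - y)

def T(v):
    return A @ v

u = np.array([1.0, 2.0])
v = np.array([-3.0, 5.0])
k = 4.0

print(np.allclose(T(u + v), T(u) + T(v)))  # True : additivity holds
print(np.allclose(T(k * u), k * T(u)))     # True : homogeneity holds
```

A numeric check on a few vectors is not a proof, but it is a quick sanity test that the symbolic verification above was done correctly.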

### Basis And Dimension Of Homogeneous system:

Basis And Dimension For solution space of the Homogeneous systems.

The Basis and Dimensions for the solution space of this system can be found as follows :

(1) Solve the system using the Gauss elimination method.

If the system has only the trivial solution, the solution space is {0}.

This space has no basis, and hence the dimension of the solution space is zero.

(2) If the solution has arbitrary constants t1, t2, ..., tn, express x as a linear combination of vectors x1, x2, ..., xn with t1, t2, ..., tn as coefficients:

x = t1x1 + t2x2 + ... + tnxn

(3) The vectors x1, x2, ..., xn form a basis for the solution space of Ax = 0, and hence the dimension of the solution space is n.

#### Example :

Determine the dimension and a basis for the solution space of the system

x1 + x2 -2x3 = 0

-2x1 -2x2 + 4x3 = 0

– x1 – x2 + 2x3 = 0

#### Solution :

The augmented matrix of the system is [A | 0], where A is the coefficient matrix of the three equations above.

Reducing the augmented matrix to row echelon form leaves a single non-zero row, since the second and third equations are multiples of the first.

The corresponding system of equations is

x1 + x2 – 2x3 = 0

Solving for the leading variables,

x1 = – x2 + 2x3

Assigning the free variables x2 and x3 arbitrary values t1 and t2 respectively,

x1 = – t1 + 2t2

x2 = t1

and x3 = t2 is the solution of the system.

The solution vector is

x = (x1, x2, x3) = (-t1 + 2t2 , t1 , t2) = t1(-1, 1, 0) + t2(2, 0, 1)

Hence, {(-1, 1, 0), (2, 0, 1)} is a basis for the solution space, and its dimension is 2.
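The same basis can be obtained with SymPy's `nullspace`, which returns basis vectors for the solution space of Ax = 0. A minimal sketch for the system above:

```python
from sympy import Matrix

A = Matrix([[1, 1, -2],
            [-2, -2, 4],
            [-1, -1, 2]])

basis = A.nullspace()  # basis of the solution space of Ax = 0
print(len(basis))      # 2 : the dimension of the solution space
for b in basis:
    print(b.T)         # basis vectors (transposed for compact printing)
```

SymPy assigns the free variables in the same way as the hand calculation, so the basis vectors correspond to (-1, 1, 0) and (2, 0, 1).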

### Linear Combination Of Vectors

**Linear Combination Of Vectors :**

A vector v is called a linear combination of vectors v1, v2, ..., vr if it can be expressed as

v = k1v1 + k2v2 + ... + krvr

where k1, k2, ..., kr are scalars.

Note : If r = 1, then v = k1v1. This shows that a vector v is a linear combination of a single vector v1 if it is a scalar multiple of v1.

Vector Expressed as a Linear Combination Of Given Vectors:

The method to check whether a vector v is a linear combination of given vectors v1, v2, ..., vr is as follows :

(1) Express v as linear combination of v1,v2,…..vr

v=k1v1+k2v2+……..krvr

(2) If the system of equations in (1) is consistent then v is a linear combination of v1,v2,…..vr .

If it is inconsistent, then v is not a linear combination of v1,v2,…..vr .

#### Example :

Which of the following are linear combinations of v1 = (0,-2,2) and v2 = (1,3,-1) ?

(1) (3,1,5)

#### Solution :

Let v = k1v1+k2v2

(1) (3,1,5) = k1(0,-2,2) + k2(1,3,-1)

= (0,-2k1,2k1) + (k2,3k2,-k2)

= (k2 , -2k1+3k2 , 2k1-k2)

Equating corresponding components,

k2 = 3

-2k1+3k2 = 1

2k1-k2 = 5

The augmented matrix of the system is the matrix with rows (0, 1 | 3), (-2, 3 | 1) and (2, -1 | 5).

Reducing the augmented matrix to row echelon form :

Interchange row 1 and row 2.

Multiply row 1 by -1/2.

Subtract two times row 1 from row 3.

Subtract two times row 2 from row 3.

This gives the rows (1, -3/2 | -1/2), (0, 1 | 3) and (0, 0 | 0).

The system of equation is consistent.

Hence, v is a linear combination of v1 and v2.

The corresponding system of equations is

k1 – (3/2)k2 = -(1/2)

k2 = 3

Solving these equations,

k1 = 4 ,

k2 = 3

**Hence, v = 4v1+3v2**
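The coefficients k1 and k2 can also be recovered with least squares. A minimal NumPy sketch: the system is consistent exactly when the least-squares solution reproduces v.

```python
import numpy as np

v1 = np.array([0.0, -2.0, 2.0])
v2 = np.array([1.0, 3.0, -1.0])
v  = np.array([3.0, 1.0, 5.0])

A = np.column_stack([v1, v2])            # columns are v1 and v2
k, *_ = np.linalg.lstsq(A, v, rcond=None)

print(np.allclose(A @ k, v))  # True -> consistent, v is a linear combination
print(np.round(k, 6))         # approximately [4, 3] -> v = 4*v1 + 3*v2
```

If the system were inconsistent, `A @ k` would not reproduce v, signalling that v is not a linear combination of v1 and v2.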

### Subspaces In Vector Space

**Subspaces In Vector Space :**

A non-empty subset W of a vector space V is called a subspace of V if W is itself a vector space under the operations defined on V.

**Note :**

Every vector space has at least two subspaces, itself and the subspace {0}.

The Subspace {0} is called the zero subspace consisting only of the zero vector.

#### Theorem :

If W is a non-empty subset of vector space V, then W is a subspace of V if and only if the following axioms hold.

**Axiom 1 :**If u and v are vectors in W then u + v is in W.

**Axiom 2 :**If k is any scalar and u is a vector in W, then ku is in W.

#### Example :

Show that W = { (x,y) | x = 3y } is a subspace of R2. State all possible subspaces of R2.

#### Solution :

Let u = { (x1,y1) | x1 = 3y1 } and v = { (x2,y2) | x2 = 3y2 } be vectors in W, and let k be any scalar.

**Axiom 1 :**

u + v = (x1,y1) + (x2,y2)

= (x1 + x2 , y1 + y2)

But , x1 = 3y1 and x2 = 3y2

Therefore, x1 + x2 = 3 (y1 + y2)

u + v = { (x1 + x2 , y1 + y2) | x1 + x2 = 3 (y1 + y2) }

Thus, u + v is in W.

**Axiom 2 :**

ku = k (x1,y1)

= (kx1,ky1)

But, x1 = 3y1

Therefore, kx1 = 3(ky1)

ku = { (kx1,ky1) | kx1 = 3(ky1) }

Thus, ku is in W.

Hence, W is a subspace of R2.

All possible subspaces of R2 are

**(1) {0} (2) R2 (3) Lines passing through the origin.**
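The two closure axioms can be spot-checked numerically for W = { (x,y) | x = 3y }. A small sketch with made-up vectors from W:

```python
import numpy as np

def in_W(v):
    # Membership test for W = { (x, y) | x = 3y }
    return np.isclose(v[0], 3 * v[1])

u = np.array([3.0, 1.0])    # 3 = 3 * 1, so u is in W
v = np.array([-6.0, -2.0])  # -6 = 3 * (-2), so v is in W
k = 2.5

print(in_W(u + v))  # True : W is closed under addition
print(in_W(k * u))  # True : W is closed under scalar multiplication
```

A numeric check on sample vectors only illustrates the closure argument; the algebraic proof above is what establishes it for all of W.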


### Vector Spaces

**Vector Spaces :**

If the following axioms are satisfied by all objects u, v, w in V and all scalars k1, k2, then V is called a vector space.

The objects in V are called vectors.

(1) If u and v are objects in V then u + v in V.

(2) u + v = v + u

(3) u + (v + w) = (u + v) + w

(4) u + 0 = 0 + u = u

(5) u + (-u) = 0

(6) If k1 is any scalar and u is an object in V, then k1u is in V.

(7) k1(k2u) = (k1k2)u

(8) k1(u+v) = k1u + k1v

(9) (k1+k2)u = k1u + k2u

(10) 1u = u

#### Example :

Determine whether the set V of all pairs of real numbers (x,y) with the operations (x1,y1) + (x2,y2) = (x1+x2+1 , y1+y2+1) and k(x,y) = (kx,ky) is a vector space.

#### Solution :

**(1) u + v = (x1,y1) + (x2,y2) = (x1+x2+1 , y1+y2+1)**

Since x1,y1,x2,y2 are real numbers x1+x2+1 and y1+y2+1 are also real numbers.

Therefore, u+v is also an object in V.

**(2) u + v = (x1+x2+1 , y1+y2+1)**

= (x2+x1+1 , y2+y1+1)

= v + u

**(3) u + (v + w) = (x1,y1) + [ (x2,y2) + (x3,y3) ] = (x1 + x2 + x3 + 2 , y1 + y2 + y3 + 2) = [ (x1,y1) + (x2,y2) ] + (x3,y3) = (u + v) + w**

Hence, vector addition is associative.

**(4) Let (a,b) be an object in V such that u + (a,b) = u.**

Then (x1 + a + 1 , y1 + b + 1) = (x1 , y1), which gives a = -1 and b = -1.

Hence, (-1,-1) is the zero vector in V.

**(5) Let (a,b) be an object in V such that (a,b) + u equals the zero vector (-1,-1).**

Then (a + x1 + 1 , b + y1 + 1) = (-1,-1), which gives a = -x1 - 2 and b = -y1 - 2.

Hence, (-x1 - 2 , -y1 - 2) is the negative of u in V.

**(6) k1u = k1(x1,y1)**

= (k1x1,k1y1)

Since k1x1 and k1y1 are real numbers, k1u is an object in V.

Hence, V is closed under scalar multiplication.

**(8) k1 (u + v) = k1 (x1 + x2 + 1 , y1 + y2 + 1) = (k1x1 + k1x2 + k1 , k1y1 + k1y2 + k1)**

But k1u + k1v = (k1x1 , k1y1) + (k1x2 , k1y2) = (k1x1 + k1x2 + 1 , k1y1 + k1y2 + 1)

These two are equal only when k1 = 1, so scalar multiplication is not distributive over this addition and axiom (8) fails.

Hence, V is not a vector space.
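The failing axiom can be confirmed symbolically. A minimal SymPy sketch of the operations defined in this example, showing that k(u + v) and ku + kv differ by (k - 1) in each component:

```python
from sympy import symbols, simplify

x1, y1, x2, y2, k = symbols('x1 y1 x2 y2 k')

def add(u, v):
    # The nonstandard addition defined on V
    return (u[0] + v[0] + 1, u[1] + v[1] + 1)

def smul(c, u):
    # The standard scalar multiplication on V
    return (c * u[0], c * u[1])

u, v = (x1, y1), (x2, y2)
lhs = smul(k, add(u, v))           # k(u + v)
rhs = add(smul(k, u), smul(k, v))  # ku + kv

print(simplify(lhs[0] - rhs[0]))   # k - 1 : nonzero unless k = 1
print(simplify(lhs[1] - rhs[1]))   # k - 1
```

Since the difference is not identically zero, distributivity fails for every scalar except k = 1, which is exactly why V is not a vector space.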

A good site for learning GTU Engineering Maths should include easy-to-follow explanations, show worked examples rather than walls of text, and let students understand the material at their own pace. Sites that focus mainly on theory and history are less useful here, because practising problems matters more in Engineering Maths than reading about it.
