Monday, August 16, 2021

Routh-Hurwitz Criterion | Stable System | marginally stable | Unstable system | control system

Routh-Hurwitz Criterion:

Stable System: 

  If all the roots of the characteristic equation lie in the left half of the 'S' plane, then the system is said to be a stable system.

Marginally Stable System: 

  If the characteristic equation has non-repeated roots on the imaginary axis of the 'S' plane and all the remaining roots lie in the left half, then the system is said to be marginally stable.

Unstable System: 

  If one or more roots of the characteristic equation lie in the right half of the 'S' plane, or if there are repeated roots on the imaginary axis, then the system is said to be an unstable system.


Statement of Routh-Hurwitz Criterion:

  The Routh-Hurwitz criterion states that a system is stable if and only if all the elements in the first column of the Routh array have the same sign. If the signs are not all the same, the number of sign changes in the first column is equal to the number of roots of the characteristic equation in the right half of the s-plane, i.e. the number of roots with positive real parts.


Necessary but not sufficient conditions for Stability:

  There are some necessary conditions that the characteristic equation of a system must satisfy for the system to be stable.


Consider a system with characteristic equation:

a0sⁿ + a1sⁿ⁻¹ + a2sⁿ⁻² + ... + an = 0


1. All the coefficients of the equation should have the same sign.

2. There should be no missing term, i.e. no power of 's' between the highest power and the constant term should have a zero coefficient.


Even if all the coefficients have the same sign and there are no missing terms, there is no guarantee that the system will be stable. For this, we use the Routh-Hurwitz criterion to check the stability of the system. If the above conditions are not satisfied, the system is not stable. This criterion was given by A. Hurwitz and E.J. Routh.


Advantages of Routh-Hurwitz Criterion:

1. We can find the stability of the system without solving the equation.

2. We can easily determine the relative stability of the system.

3. By this method, we can determine the range of K for stability.

4. By this method, we can also determine the points at which the root locus intersects the imaginary axis.

Limitations of Routh-Hurwitz Criterion:

1. This criterion is applicable only to linear time-invariant systems.

2. It does not provide the exact locations of the poles in the right and left halves of the S plane.

3. It is valid only for characteristic equations with real coefficients.

The Routh-Hurwitz Criterion:

Consider the following characteristic polynomial:

a0sⁿ + a1sⁿ⁻¹ + a2sⁿ⁻² + ... + an = 0

where the coefficients a0, a1, ..., an are all of the same sign, and none is zero.


Step 1: Arrange all the coefficients of the above equation in two rows:

sⁿ row:    a0   a2   a4   a6 ...
sⁿ⁻¹ row:  a1   a3   a5   a7 ...

Step 2: From these two rows we will form the third row:

sⁿ⁻² row:  b1   b2   b3 ...
where b1 = (a1a2 − a0a3)/a1, b2 = (a1a4 − a0a5)/a1, and so on.

Step 3: Now, we shall form the fourth row by using the second and third rows:

sⁿ⁻³ row:  c1   c2   c3 ...
where c1 = (b1a3 − a1b2)/b1, c2 = (b1a5 − a1b3)/b1, and so on.

Step 4: We shall continue this procedure of forming new rows until the s⁰ row is reached.


Example

Check the stability of the system whose characteristic equation is given by


s⁴ + 2s³ + 6s² + 4s + 1 = 0


Solution

Obtain the array of coefficients as follows:

s⁴ :   1      6      1
s³ :   2      4      0
s² :   4      1      0
s¹ :   3.5    0
s⁰ :   1


Since all the elements in the first column are of the same sign, i.e. positive, there are no sign changes; the given equation has no roots with positive real parts, and therefore the system is stable.
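A minimal Python sketch can reproduce this table and count the sign changes in the first column. The function name routh_array is an illustrative choice for this post, and the sketch does not handle the special cases of a zero first-column element or an all-zero row.

# A minimal sketch: build the Routh array from the coefficient list
# (highest power of s first) and count first-column sign changes.
def routh_array(coeffs):
    n = len(coeffs)
    cols = (n + 1) // 2
    # The first two rows come straight from the coefficients.
    rows = [list(coeffs[0::2]) + [0] * (cols - len(coeffs[0::2])),
            list(coeffs[1::2]) + [0] * (cols - len(coeffs[1::2]))]
    # Each new row is formed from the two rows above it.
    for i in range(2, n):
        prev, prev2 = rows[i - 1], rows[i - 2]
        row = [(prev[0] * prev2[j + 1] - prev2[0] * prev[j + 1]) / prev[0]
               for j in range(cols - 1)]
        row.append(0)
        rows.append(row)
    return rows

# Characteristic equation: s^4 + 2s^3 + 6s^2 + 4s + 1 = 0
table = routh_array([1, 2, 6, 4, 1])
first_col = [row[0] for row in table]
sign_changes = sum(1 for a, b in zip(first_col, first_col[1:]) if a * b < 0)
print(first_col)      # [1, 2, 4.0, 3.5, 1.0]
print(sign_changes)   # 0  ->  no roots in the right half of the s-plane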


Sunday, August 15, 2021

Time domain specifications | Delay Time | Rise Time | Peak Time | Maximum Peak | Settling Time | Steady State error | control system

Time Domain Specifications:

  The time domain specifications are defined on the unit step response of the system. The response up to the settling time is known as the transient response, and the response after the settling time is known as the steady-state response.



  The performance of a control system is usually expressed in terms of its transient response to a unit step input, because a step input is easy to generate and the corresponding initial conditions are zero.

Following are the common transient response characteristics:

1. Delay Time.
2. Rise Time.
3. Peak Time.
4. Maximum Overshoot (Peak Overshoot).
5. Settling Time.
6. Steady State error.


Delay Time:

  The time required for the response to reach 50% of the final value for the first time is called the delay time.

Rise Time:

  The time required for the response to rise from 10% to 90% of the final value (for an overdamped system) or from 0% to 100% (for an underdamped system) is called the rise time of the system.

Peak Time:

  The time required for the response to reach the first peak of the time response (the first peak overshoot) is called the peak time.

Maximum overshoot:

  The difference between the first peak of the response and the steady-state output is called the maximum overshoot. It is usually expressed as a percentage of the final value and is defined by

Mp = [c(tp) − c(∞)] / c(∞) × 100%


Settling Time (ts):

  The time required for the response to reach and stay within a specified tolerance band (usually 2% or 5%) of its final value is called the settling time.

Steady State Error (ess):

  The difference between the actual output and the desired output as time 't' tends to infinity is called the steady-state error of the system.

Example - 1:

When a second-order system is subjected to a unit step input, the values of ξ = 0.5 and ωn = 6 rad/sec. Determine the rise time, peak time, settling time and peak overshoot.

Solution:

Given:
ξ = 0.5, ωn = 6 rad/sec
 
Rise Time:

ωd = ωn√(1 − ξ²) = 6 × √(1 − 0.25) = 5.196 rad/sec
β = tan⁻¹(√(1 − ξ²)/ξ) = tan⁻¹(1.732) = 60° = 1.047 rad
tr = (π − β)/ωd = (3.1416 − 1.047)/5.196 ≈ 0.403 sec

Peak time:

tp = π/ωd = 3.1416/5.196 ≈ 0.605 sec

Settling Time:

ts = 4/(ξωn) = 4/(0.5 × 6) ≈ 1.33 sec (for the 2% criterion)

Maximum overshoot:

Mp = e^(−ξπ/√(1 − ξ²)) × 100% = e^(−1.814) × 100% ≈ 16.3%
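The same values can be checked with a minimal Python sketch that applies the standard second-order formulas; the variable names are chosen here only for illustration.

import math

zeta = 0.5          # damping ratio
wn = 6.0            # undamped natural frequency, rad/sec

wd = wn * math.sqrt(1 - zeta**2)                   # damped frequency ~ 5.196 rad/sec
beta = math.atan2(math.sqrt(1 - zeta**2), zeta)    # ~ 60 deg = 1.047 rad

tr = (math.pi - beta) / wd                         # rise time      ~ 0.403 sec
tp = math.pi / wd                                  # peak time      ~ 0.605 sec
ts = 4 / (zeta * wn)                               # settling time  ~ 1.33 sec (2% criterion)
Mp = 100 * math.exp(-zeta * math.pi / math.sqrt(1 - zeta**2))   # ~ 16.3 %

print(tr, tp, ts, Mp)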




Transfer Function | Poles and Zeros of a Transfer Function | Control System

Transfer Function: 


The transfer function of a system is defined as the ratio of the Laplace transform of the output to the Laplace transform of the input, with all initial conditions assumed to be zero.

T(S) = C(S) / R(S)

Where,

1. T(S) = Transfer function of the system.  
2. C(S) = Laplace transform of the output.  
3. R(S) = Laplace transform of the reference input.  
4. G(S) = Gain.  

Steps to get the transfer function:

Step 1: Write the differential equation.


Step 2: Take the Laplace transform of the equation, assuming zero initial conditions.

Step 3: Take the ratio of output to input.

Step 4: Write down the equation for G(S) as follows:

G(S) = (b0Sᵐ + b1Sᵐ⁻¹ + ... + bm) / (a0Sⁿ + a1Sⁿ⁻¹ + ... + an)        ... (Eq. 1)

Here, the a's and b's are constants, and S is a complex variable.
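As a small illustration of these steps, here is a minimal sympy sketch. It assumes a hypothetical first-order system a·dy/dt + b·y = x(t), chosen here only for brevity; it is not the system discussed in the article.

# Taking the Laplace transform with zero initial conditions replaces d/dt by s,
# so  (a*s + b)*Y(s) = X(s)  and  G(s) = Y(s)/X(s) = 1/(a*s + b).
import sympy as sp

s, a, b = sp.symbols('s a b', positive=True)
Y, X = sp.symbols('Y X')

# Transformed equation with zero initial conditions
eq = sp.Eq((a*s + b)*Y, X)

# Ratio of output to input gives the transfer function
G = sp.solve(eq, Y)[0] / X
print(sp.simplify(G))   # 1/(a*s + b)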

Characteristic equation of a transfer function:

  The characteristic equation of a linear system is obtained by setting the denominator polynomial of the transfer function equal to zero. Thus the characteristic equation of the transfer function in Eq. 1 will be:

a0Sⁿ + a1Sⁿ⁻¹ + ... + an = 0


Poles and Zeros of a transfer function:

  Consider Eq. 1; the numerator and denominator can be factored into m and n terms respectively:

G(S) = K(S + z1)(S + z2) ... (S + zm) / [(S + p1)(S + p2) ... (S + pn)]

where K is known as the gain factor and 'S' is the complex frequency.

Poles

  Poles are the frequencies of the transfer function for which the value of the transfer function becomes infinite (the denominator becomes zero).

Zeros

  Zeros are the frequencies of the transfer function for which the value of the transfer function becomes zero (the numerator becomes zero).

The Sridharacharya (quadratic) formula can be applied to the numerator and denominator polynomials to find the zeros and poles.

If two or more poles (or zeros) coincide, they are called multiple poles or multiple zeros.

If the poles and zeros do not coincide, they are called simple poles or simple zeros.

For example, find the poles and zeros of the following transfer function:

G(S) = (S + 3) / [S(S + 2)(S + 4)²]

The zero of the function is S = -3, and the poles are S = 0, S = -2, and a multiple pole at S = -4, i.e. a pole of order 2 at S = -4.
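A minimal numpy sketch can recover these poles and zeros numerically; the coefficient lists below are written out by hand from the stated zero at S = -3 and poles at S = 0, -2 and -4 (double).

import numpy as np

num = [1, 3]                                    # numerator  S + 3
den = np.polymul([1, 0], np.polymul([1, 2], np.polymul([1, 4], [1, 4])))
# denominator S(S + 2)(S + 4)^2 = S^4 + 10S^3 + 32S^2 + 32S

zeros = np.roots(num)    # frequencies where the transfer function becomes zero
poles = np.roots(den)    # frequencies where the transfer function becomes infinite

print(zeros)             # [-3.]
print(poles)             # 0, -2 and a double pole at -4 (printing order may vary)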


Saturday, August 14, 2021

Linear and Non-Linear Control Systems:


Linear Control Systems:

 In order to understand linear control systems, we should first understand the principle of superposition. The principle of superposition includes two important properties, which are explained below:

Homogeneity: A system is said to be homogeneous if, when the input is multiplied by some constant A, the output is also multiplied by the same constant A.

Additivity: Consider a system S. Suppose an input a1 produces an output b1, and an input a2 produces an output b2.
Now, if the input (a1 + a2) produces the output (b1 + b2), then system S follows the property of additivity. We can therefore define linear control systems as those control systems that follow both the principle of homogeneity and the principle of additivity, i.e. the principle of superposition. A simple check of these two properties is sketched below.
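A minimal Python sketch of this check; the gain value and the test inputs are arbitrary illustrative choices, not values from the article.

# Check homogeneity and additivity for a system y = S(x).
# A pure gain (y = 2x) passes the test, while y = x**2 fails it.
def check_superposition(S, a1=1.5, a2=-2.0, A=3.0, tol=1e-9):
    homogeneity = abs(S(A * a1) - A * S(a1)) < tol
    additivity = abs(S(a1 + a2) - (S(a1) + S(a2))) < tol
    return homogeneity and additivity

print(check_superposition(lambda x: 2 * x))    # True  -> linear
print(check_superposition(lambda x: x ** 2))   # False -> non-linear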

Examples of Linear Control System:

Consider a purely resistive network with a constant DC source. This circuit follows the principle of homogeneity and additivity. Neglecting all undesired effects and assuming ideal behaviour of each element in the network, we get a linear voltage-current characteristic. This is an example of a linear control system.

Non-linear Control Systems:

We can simply define a non-linear control system as a control system that does not follow the principle of superposition (homogeneity and additivity). In real life, all control systems are non-linear to some extent (purely linear control systems exist only in theory). The describing function is an approximate procedure for analysing certain non-linear control problems.

Examples of Non-linear System

A well-known example of a non-linear system is the magnetization curve, or no-load curve, of a DC machine. The no-load curve gives the relationship between the air-gap flux and the field-winding mmf. In the beginning there is a linear relationship between the field-winding mmf and the air-gap flux, but beyond a point saturation sets in, which shows the non-linear behaviour of the characteristic.



Continuous Systems and Discrete Systems:

Continuous Systems:- 

 Continuous (continuous-time) systems are those systems in which both the input and the output signals are continuous-time signals, i.e. they are defined at every instant of time. The variables change continuously with time, and there are no abrupt breaks in the input and output signals. In response to a continuous input signal, a continuous system generates a continuous output signal.

 

Consider two variables x and y, both of which vary with time. Continuous-time signals are written with parentheses ( ), e.g. x(t) and y(t).



Discrete Systems:-

In discrete systems, both the input and output signals are discrete-time signals. The variables still vary with time, but the state of the variables changes only at a discrete set of points in time, so the changes are discontinuous.


The variables in a discrete system, x and y, are always written with square brackets [ ], e.g. x[n] and y[n]. A small sketch of obtaining a discrete-time sequence by sampling a continuous-time signal is given below.
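A minimal Python sketch of the relationship between the two representations; the signal x(t) = sin t and the sampling period T are arbitrary illustrative choices.

import math

T = 0.5                          # sampling period (assumed value)

def x_continuous(t):             # x(t): defined for every instant of time
    return math.sin(t)

x_discrete = [x_continuous(n * T) for n in range(8)]   # x[n] = x(nT)

print(x_discrete)                # the sequence exists only at n = 0, 1, 2, ...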

 



Friday, August 13, 2021

Comparison between Open Loop Control System and Closed Loop Control System:

Comparison Chart:-

Basis For Comparison | Open Loop System | Closed Loop System
Definition | The system whose control action is free from the output is known as an open loop control system. | In a closed loop system, the output depends on the control action of the system.
Other Name | Non-feedback system | Feedback system
Components | Controller and controlled process. | Amplifier, controller, controlled process, feedback.
Construction | Simple | Complex
Reliability | Non-reliable | Reliable
Accuracy | Depends on calibration | Accurate because of feedback
Stability | Stable | Less stable
Optimization | Not possible | Possible
Response | Fast | Slow
Calibration | Difficult | Easy
System Disturbance | Affected | Not affected
Linearity | Non-linear | Linear
Examples | Traffic light, automatic washing machine, immersion rod, TV remote, etc. | Air conditioner, temperature control system, speed and pressure control system, refrigerator, toaster.


Thursday, August 12, 2021

Control System | open loop system | closed loop system

Control System:-

 A control system manages, commands, directs, or regulates the behavior of other devices or systems using control loops. It can range from a single home heating controller using a thermostat controlling a domestic boiler to large industrial control systems which are used for controlling processes or machines.


Types of Control System:-

1. Open Loop Control System.

2. Closed Loop Control System.


1. Open Loop Control System:-

 In an open-loop control system, the control action from the controller is independent of the process variable. An example of this is a central heating boiler controlled only by a timer. The control action is the switching on or off of the boiler. The process variable is the building temperature. This controller operates the heating system for a constant time regardless of the temperature of the building.

 Another example of an open-loop control system is a light bulb in our house: when we turn the switch on, the bulb glows, and when we turn the switch off, the bulb goes off. The switching action does not depend on the light output.

2. Closed Loop Control System:-

 In a closed-loop control system, the control action from the controller is dependent on the desired and actual process variable. In the case of the boiler analogy, this would utilise a thermostat to monitor the building temperature, and feed back a signal to ensure the controller output maintains the building temperature close to that set on the thermostat.

 A closed loop controller has a feedback loop which ensures the controller exerts a control action to control a process variable at the same value as the setpoint. For this reason, closed-loop controllers are also called feedback controllers.

 Another example of a closed-loop control system is a smart air conditioner. When you set the desired temperature (for example, 22°C), the air conditioner switches off automatically once the room temperature reaches that setpoint. A minimal sketch contrasting the open-loop and closed-loop behaviour is given below.
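A minimal Python sketch of the two behaviours described above; the temperatures, heater gain and step counts are assumed values chosen only for illustration.

# Open loop: the heater runs for a fixed time regardless of the temperature.
def open_loop(temp, heater_gain=0.5, run_steps=10):
    for _ in range(run_steps):
        temp += heater_gain
    return temp

# Closed loop: the control action depends on the measured temperature (feedback).
def closed_loop(temp, setpoint=22.0, heater_gain=0.5, steps=50):
    for _ in range(steps):
        if temp < setpoint:          # error = setpoint - temp > 0 -> keep heating
            temp += heater_gain
    return temp

print(open_loop(15.0))     # 20.0 -- final value depends only on the timer
print(closed_loop(15.0))   # 22.0 -- held at the setpoint by feedback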





