Introduction: Amortized analysis is a technique for finding the average time required to perform a sequence of data-structure operations. We use amortized analysis when one operation in the sequence affects the cost of the operations that follow.

A single operation in the sequence may cost more than the average over the whole sequence. Amortized analysis is not applicable everywhere; it applies whenever one operation in a problem affects the later operations in that problem. If the worst-case running time of any single data-structure operation in the sequence is p, and the sequence contains m operations, then mp is an upper bound on the total running time. This is technically correct, but it can be surprisingly loose and give a misleading picture of the actual cost. From the above we conclude the main idea of our topic: amortized analysis applies when one operation in a sequence of data-structure operations affects the operations that follow.

Amortized analysis itself yields an upper bound: an average per-operation cost in the worst case. It is concerned with the total cost of a whole sequence of operations; it does not deal with the cost of a single operation in that sequence. Let us suppose that the amortized cost of insertion into a splay tree with m items is O(log m). This does not mean that when I insert '54' into this tree, the cost will be O(log m); in fact, inserting '54' might require O(m) operations. It is only appropriate to say that when I insert m items into the tree, the average time for each operation will be O(log m).

History: Amortization in finance means to pay off a debt, i.e. loans and mortgages, by smaller payments made over a period of time. The method of aggregate analysis, now part of what is known as amortized analysis, was introduced by Robert Tarjan. As he wrote in his 1985 paper, it can achieve surprisingly good upper and lower bounds for many varieties of algorithms, and the technique has since been used by many researchers. Tarjan also showed that it yields "self-adjusting" data structures that are efficient as long as their amortized complexity is low. Amortization plays a vital role in the analysis of many other standard algorithms and data structures, including:

· Maximum flow: in optimization theory, maximum-flow problems involve finding a feasible flow through a single-source, single-sink flow network that is maximum.

· Fibonacci heaps: in computer science, a Fibonacci heap is a data structure for priority-queue operations, consisting of a collection of heap-ordered trees.

It has a better amortized running time than many other priority-queue data structures, including the binary heap and the binomial heap. Michael L. Fredman and Robert E. Tarjan developed Fibonacci heaps in 1984 and published them in a scientific journal in 1987. They named Fibonacci heaps after the Fibonacci numbers, which are used in their running-time analysis.

· Dynamic arrays: in computer science, a dynamic array (also called a growable array, resizable array, dynamic table, mutable array, or array list) is a random-access, variable-size list data structure that allows elements to be added or removed. It is supplied with the standard libraries of many modern mainstream programming languages. Dynamic arrays overcome a limit of static arrays, which have a fixed capacity that must be specified at allocation.

Comparison to other analysis techniques: Worst-case analysis can give overly pessimistic bounds for sequences of operations, because it ignores the interactions between operations on the same data structure. Amortized analysis can give a tighter bound than the naive worst-case bound by taking such interactions into account. The bound given by amortized analysis is a worst-case bound on the average time per operation: a specific operation in the sequence may cost more than this bound, but the total cost over any valid sequence of operations always stays within the bound. Amortized analysis resembles average-case analysis, except that it averages the cost over a sequence of operations rather than over a distribution of inputs.

Methods of amortized analysis: We are going to discuss three methods:

· Aggregate method
· Potential method
· Accounting method

Aggregate method: In the aggregate method, the overall running time of a sequence of operations is analyzed.

Ø In aggregate analysis, we show that for every m, a sequence of m operations takes worst-case time T(m) in total.

Ø In the worst case, the average cost, or amortized cost, per operation is therefore T(m)/m. Note that this amortized cost applies to each operation, even when there are several types of operations in the sequence.

The other two methods are:

Ø The accounting method
Ø The potential method

Stack operations: In our first example, we consider a stack with two operations, PUSH and POP, and assume each costs O(1).

PUSH(S, x) pushes object x onto stack S.

POP(S) pops the top object off stack S and returns the popped object. Make sure that you do not call POP on an empty stack.

Ø What happens if you call POP on an empty stack? It will give us an error.

Since every stack operation implemented so far takes O(1) time, the total cost of a sequence of n PUSH and POP operations is n, and the actual running time of the sequence is Θ(n). Now let us add one more operation, MULTIPOP(S, k), where S is a stack and k is a number of objects. MULTIPOP(S, k) removes the top k objects of stack S, emptying the stack if it contains fewer than k objects. We always assume that k is positive; otherwise the MULTIPOP operation leaves the stack unchanged.

In the following pseudocode, the operation STACK-EMPTY returns TRUE if there are no objects currently on the stack, and FALSE otherwise. The top 4 objects are popped when we call MULTIPOP(S, 4). The next operation, MULTIPOP(S, 8), empties the stack, since fewer than 8 objects remain.
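A minimal sketch of this operation, assuming a plain Python list as the stack (the helper names `stack_empty` and `multipop` are illustrative, not from the original):

```python
def stack_empty(S):
    """Return True if the stack (a Python list) holds no objects."""
    return len(S) == 0

def multipop(S, k):
    """Pop the top k objects of S, or empty S if it holds fewer than k."""
    while not stack_empty(S) and k > 0:
        S.pop()      # POP(S): remove the top object
        k = k - 1
```

For example, calling `multipop(S, 4)` on a 7-element stack leaves 3 elements, and a subsequent `multipop(S, 8)` empties it.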

What is the running time of MULTIPOP(S, k)? The actual time is linear in the number of POP operations executed, so let us assign a cost of 1 to each PUSH and POP. The while loop runs until the stack is empty or k reaches zero, and each iteration of the loop performs one POP, even when we call MULTIPOP with a large k. Thus the total cost of MULTIPOP is min(s, k), where s is the stack size. Recalling the idea of amortized analysis, we will see that the sequence as a whole takes linear time even though one operation in it is more expensive than the others. Let us consider a sequence of n PUSH, POP, and MULTIPOP operations on an initially empty stack.

The worst-case cost of a MULTIPOP operation in the sequence is O(n), since the stack size is at most n. The worst-case time of any stack operation is therefore O(n), and hence a sequence of n operations costs O(n²), since we may have O(n) MULTIPOP operations costing O(n) each. Although this analysis is correct, the O(n²) bound, which we obtained by taking the worst-case cost of each operation individually, is not tight.

Using aggregate analysis, we can obtain a better upper bound that considers the entire sequence of n operations. In fact, although a single MULTIPOP operation can be expensive, any sequence of n PUSH, POP, and MULTIPOP operations on an initially empty stack can cost at most O(n). We can pop each object from the stack at most once for each time we have pushed it onto the stack.

Therefore, the number of times that POP can be called on a nonempty stack, including calls within MULTIPOP, is at most the number of PUSH operations, which is at most n. For any value of n, any sequence of n PUSH, POP, and MULTIPOP operations takes a total of O(n) time. The average cost of an operation is O(n)/n = O(1). In aggregate analysis, we assign the amortized cost of each operation to be the average cost. In this example, therefore, all three stack operations have an amortized cost of O(1).
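The aggregate bound can be checked experimentally. The sketch below (the function `run_sequence` and its random mix of operations are illustrative assumptions) counts one unit of cost per PUSH and per individual POP, including the pops inside MULTIPOP; since each object is popped at most once per push, the total never exceeds 2n:

```python
import random

def run_sequence(n, seed=1):
    """Perform n random PUSH/POP/MULTIPOP operations on an empty stack
    and return the total actual cost (one unit per push or pop)."""
    rng = random.Random(seed)
    stack, cost = [], 0
    for _ in range(n):
        op = rng.choice(["push", "pop", "multipop"])
        if op == "push":
            stack.append(0)
            cost += 1
        elif op == "pop":
            if stack:            # POP is only legal on a nonempty stack
                stack.pop()
                cost += 1
        else:                    # MULTIPOP(S, k) pops min(k, |S|) objects
            k = rng.randrange(1, 6)
            popped = min(k, len(stack))
            del stack[len(stack) - popped:]
            cost += popped
    return cost
```

Any such run of n operations reports a cost of at most 2n, matching the O(n) aggregate bound.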

We emphasize again that although we have just shown that the average cost, and hence the running time, of a stack operation is O(1), we actually proved a worst-case bound of O(n) on a sequence of n operations; dividing this total cost by n yielded the average cost per operation, i.e. the amortized cost.

Incrementing a binary counter: The worst case of an increment occurs when all i bits are flipped, so INCREMENT(A) has running time O(i). In a sequence of n INCREMENT operations, however, few increments cause that many bits to flip. Indeed, bit 0 flips with every increment, bit 1 flips with every 2nd increment, bit 2 flips with every 4th increment, and so on. The total number of bit flips in n INCREMENT operations is

n + n/2 + n/4 + … + n/2^i < n · 1/(1 − 1/2) = 2n,

so the total cost of the sequence is O(n).
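A sketch of INCREMENT on a fixed-width counter (bits stored least-significant first; the function name and the choice of 16 bits are assumptions for illustration), counting bit flips, confirms the total stays below 2n:

```python
def increment(A):
    """Increment binary counter A (list of bits, least significant first).
    Return the number of bits flipped."""
    flips = 0
    i = 0
    while i < len(A) and A[i] == 1:
        A[i] = 0          # reset a trailing 1 bit to 0
        flips += 1
        i += 1
    if i < len(A):
        A[i] = 1          # set at most one bit to 1
        flips += 1
    return flips

A = [0] * 16
total_flips = sum(increment(A) for _ in range(1000))
# total_flips is n + n/2 + n/4 + ... < 2n for n = 1000
```

After 1000 increments the counter encodes 1000 and the flip count is below 2000, as the geometric series predicts.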

The amortized cost per operation is O(n)/n = O(1). The cost of each INCREMENT operation is linear in the number of bits flipped; as with the stack example, analyzing each operation in isolation yields a bound that is correct but not tight.

The accounting method: The accounting method is a method of amortized analysis based on accounting. Different operations are assigned different charges, with some operations charged more or less than they actually cost; we call the amount we charge an operation its amortized cost. Note, however, that a suitable assignment is not always immediately obvious; choosing the correct charges for the accounting method often requires as much knowledge of the problem, and of the complexity bounds one is attempting to prove, as the other two methods do.

When the amortized cost of an operation exceeds its actual cost, we assign the difference to specific objects in the data structure as credit. Credit can help pay for later operations whose actual cost is greater than their amortized cost. Our credit must never be negative, not just because banks frown on it, but because if we let the credit go negative, we cannot guarantee that the total amortized cost is an upper bound on the total actual cost for every sequence.

So we want a guarantee that, for any sequence of operations, the total amortized cost is an upper bound on the total actual cost of that sequence. Thus, we can view the amortized cost of an operation as being split between its actual cost and credit that is either deposited or used up. Different operations may have different amortized costs; this sets the method apart from aggregate analysis, in which all operations have the same amortized cost.

Table resizing: In data structures such as the dynamic array, heap, stack, and hash table (to name a few), we use the idea of resizing the underlying array when the current array is full or reaches a chosen threshold. During resizing we allocate an array twice as large and move the data over to the new array. So, if the array is full, the cost of an insertion is linear; if the array is not full, an insertion takes constant time.

To derive an amortized cost, we start with an array of size 1 and perform n insertions. We must choose the amortized costs of operations carefully. If we want to show that the average cost per operation is small in the worst case by analyzing with amortized costs, we must guarantee that, for any sequence of operations, the total amortized cost is an upper bound on the total actual cost of that sequence. Moreover, as in aggregate analysis, this relationship must hold for all sequences of operations. If we denote the actual cost of the ith operation by ci and the amortized cost of the ith operation by ĉi, we require that

ĉ1 + ĉ2 + … + ĉn ≥ c1 + c2 + … + cn

for all sequences of n operations.
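The doubling scheme can be sketched as follows (the class name `DynArray` is a hypothetical example; actual cost counts one unit per element written, including elements copied during a resize):

```python
class DynArray:
    """Dynamic array that doubles capacity when full, tracking actual cost."""

    def __init__(self):
        self.capacity = 1
        self.data = [None] * self.capacity
        self.size = 0
        self.cost = 0

    def append(self, x):
        if self.size == self.capacity:
            # Full: allocate an array twice as large and move the data over.
            self.capacity *= 2
            new_data = [None] * self.capacity
            for i in range(self.size):
                new_data[i] = self.data[i]
                self.cost += 1   # one unit per element copied
            self.data = new_data
        self.data[self.size] = x
        self.size += 1
        self.cost += 1           # one unit for the write itself

a = DynArray()
for i in range(1000):
    a.append(i)
```

Starting from capacity 1, n insertions incur n writes plus 1 + 2 + 4 + … < 2n copy units, so the total cost stays below 3n and the amortized cost per insertion is O(1).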

For some individual operations, however, the actual cost may be greater than the amortized cost.

Stack operations: To illustrate the accounting method of amortized analysis, let us return to the stack example. Recall that the actual costs of the operations were:

PUSH 1, POP 1, MULTIPOP min(k, s),

where k is the argument supplied to MULTIPOP and s is the stack size when it is called. Let us assign the following amortized costs:

PUSH 2, POP 0, MULTIPOP 0.

Note that the amortized cost of MULTIPOP is a constant (0), whereas its actual cost is variable. Here, all three amortized costs are constant.

In general, the amortized costs of the operations under consideration may differ from each other, and they may even differ asymptotically. We shall now show that we can pay for any sequence of stack operations by charging the amortized costs. Suppose we use a dollar bill to represent each unit of cost, and draw an analogy between the stack data structure and a stack of plates in a cafeteria, starting with an empty stack.

When we push a plate onto the stack, we use 1 dollar to pay the actual cost of the push and are left with a credit of 1 dollar (out of the 2 dollars charged), which we leave on top of the plate. At any point in time, every plate on the stack has a dollar of credit on it. The dollar stored on the plate serves as prepayment for the cost of popping it from the stack. When we execute a POP operation, we charge the operation nothing and pay its actual cost using the credit stored in the stack: to pop a plate, we take the dollar of credit off the plate and use it to pay the actual cost of the operation. Thus, by charging the PUSH operation a little bit more, we can charge the POP operation nothing. Moreover, we can also charge MULTIPOP operations nothing.

To pop the first plate, we take the dollar of credit off the plate and use it to pay the actual cost of a POP operation. To pop a second plate, we again have a dollar of credit on the plate to pay for the POP operation, and so on. Thus, we have always charged enough up front to pay for MULTIPOP operations. In other words, since each plate on the stack has 1 dollar of credit on it, and the stack always has a nonnegative number of plates, we have ensured that the amount of credit is always nonnegative. Thus, for any sequence of n PUSH, POP, and MULTIPOP operations, the total amortized cost is an upper bound on the total actual cost. Since the total amortized cost is O(n), so is the total actual cost.
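The accounting argument can be checked mechanically. This sketch (the function `check_accounting` and the random operation mix are illustrative assumptions) applies the charges PUSH 2, POP 0, MULTIPOP 0 and verifies that the running credit never goes negative:

```python
import random

def check_accounting(n, seed=7):
    """Charge PUSH 2, POP 0, MULTIPOP 0; verify that credit stays
    nonnegative, so total amortized cost bounds total actual cost."""
    rng = random.Random(seed)
    stack = []
    amortized = actual = 0
    for _ in range(n):
        op = rng.choice(["push", "pop", "multipop"])
        if op == "push":
            stack.append(0)
            amortized += 2   # $1 pays the push, $1 stays on the plate
            actual += 1
        elif op == "pop":
            if stack:
                stack.pop()
                actual += 1  # paid by the credit on the popped plate
        else:
            k = rng.randrange(1, 6)
            popped = min(k, len(stack))
            del stack[len(stack) - popped:]
            actual += popped
        credit = amortized - actual
        assert credit == len(stack)  # credit = $1 per plate still on stack
        assert credit >= 0
    return amortized, actual

am, ac = check_accounting(1000)
```

The invariant holds because credit = 2·(pushes) − (pushes + pops) = plates still on the stack, which is never negative.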

Incrementing a binary counter: As another illustration of the accounting method, we analyze the INCREMENT operation on a binary counter that starts at zero. As we observed earlier, the running time of this operation is proportional to the number of bits flipped, which we shall use as our cost for this example. Let us once again use a dollar bill to represent each unit of cost (the flipping of a bit in this example). For the amortized analysis, let us charge an amortized cost of 2 dollars to set a bit to 1.

When a bit is set, we use 1 dollar (out of the 2 dollars charged) to pay for the actual setting of the bit, and we place the other dollar on the bit as credit to be used later, when we flip the bit back to 0. At any point in time, every 1 in the counter has a dollar of credit on it, and thus we can charge nothing to reset a bit to 0; we just pay for the reset with the dollar bill on the bit. Now we can determine the amortized cost of INCREMENT. The cost of resetting the bits within the while loop is paid for by the dollars on the bits that are reset. The INCREMENT procedure sets at most one bit, and therefore the amortized cost of an INCREMENT operation is at most 2 dollars. The number of 1s in the counter never becomes negative, and thus the amount of credit stays nonnegative at all times.

Thus, for n INCREMENT operations, the total amortized cost is O(n), which bounds the total actual cost.

Potential method: This method of amortized analysis represents the prepaid work as "potential energy," or just "potential," which can be released to pay for future operations. We associate the potential with the data structure as a whole instead of with specific objects within it.

Working: View the bank account as the potential energy (as in physics) of the dynamic set.

Ø Start with an initial data structure D0.
Ø Operation i transforms Di−1 into Di.
Ø The cost of operation i is ci.
Ø Define a potential function Φ : {Di} → R such that Φ(D0) = 0 and Φ(Di) ≥ 0 for all i.

The amortized cost ĉi with respect to Φ is defined to be

ĉi = ci + Φ(Di) − Φ(Di−1).

This is like the accounting method, but we think of the credit as potential stored with the entire data structure.

q The accounting method stores credit with specific objects, while the potential method stores potential in the data structure as a whole.
q Potential can be released to pay for future operations; this is the most flexible of the amortized analysis methods.
q If ΔΦi > 0, then ĉi > ci: operation i stores work in the data structure for later use.

q If ΔΦi < 0, then ĉi < ci: the data structure delivers up stored work to help pay for operation i.
q The total amortized cost of n operations is

ĉ1 + ĉ2 + … + ĉn = (c1 + c2 + … + cn) + Φ(Dn) − Φ(D0) ≥ c1 + c2 + … + cn,

summing the definition telescopically, since Φ(Dn) ≥ 0 and Φ(D0) = 0.

Stack example: Define Φ(Di) = the number of items in the stack after operation i. Thus Φ(D0) = 0.

Plug in for the operations, where j is the number of items on the stack before the operation:

Push: ĉi = ci + Φ(Di) − Φ(Di−1) = 1 + (j+1) − j = 2
Pop: ĉi = ci + Φ(Di) − Φ(Di−1) = 1 + (j−1) − j = 0
Multipop: ĉi = ci + Φ(Di) − Φ(Di−1) = k′ + (j−k′) − j = 0, where k′ = min(|S|, k)

Binary counter: Define the potential of the counter after the ith operation by Φ(Di) = bi, the number of 1s in the counter after the ith operation. Note:

• Φ(D0) = 0,
• Φ(Di) ≥ 0 for all i.

Assume the ith INCREMENT resets ti bits. Its actual cost is ci = ti + 1, and the number of 1s after the ith operation is bi = bi−1 − ti + 1. The amortized cost of the ith INCREMENT is

ĉi = ci + Φ(Di) − Φ(Di−1) = (ti + 1) + (1 − ti) = 2.

Therefore, n INCREMENTs cost Θ(n) in the worst case.
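The stack computations above can be checked directly. This sketch (the function `amortized_costs` and the sample operation list are illustrative assumptions; actual costs are 1 per push or pop as in the text) evaluates ĉi = ci + Φ(Di) − Φ(Di−1) with Φ = stack size:

```python
import random

def amortized_costs(ops, seed=3):
    """Return (op, amortized cost) pairs, where the amortized cost is
    actual cost + Phi(after) - Phi(before) and Phi = stack size."""
    rng = random.Random(seed)
    stack, results = [], []
    for op in ops:
        phi_before = len(stack)
        if op == "push":
            stack.append(0)
            actual = 1
        elif op == "pop":
            if not stack:
                continue         # skip POP on an empty stack
            stack.pop()
            actual = 1
        else:                    # multipop with a random k
            k = rng.randrange(1, 6)
            popped = min(k, len(stack))
            del stack[len(stack) - popped:]
            actual = popped
        results.append((op, actual + len(stack) - phi_before))
    return results

ops = ["push", "push", "push", "pop", "push", "multipop"]
costs = amortized_costs(ops)
```

Every push comes out with amortized cost 2 and every pop or multipop with amortized cost 0, matching the derivation above regardless of how large each multipop is.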