Sunday, December 29, 2019
The John F. Kennedy Assassination
The day of November 22, 1963 in Dallas, Texas was a big one. In fact, it is one of the most talked-about topics in all of America: the John F. Kennedy assassination. To this day in 2016, fifty-three years later, it is still a very relevant topic. What really happened that day? Who really assassinated him? Was it someone who actually had something against JFK as an individual, or was it, as most people think, a conspiracy? I am going to talk about the life of Kennedy before and during his presidency, and how it led to one of the biggest tragedies in America. Born on May 29, 1917 in Brookline, Massachusetts, Kennedy would become the nation's first president born in the twentieth century. He grew up in a wealthy home with wealthy parents. Much like our current president-elect, except in this case, Kennedy was much cooler and not irritating. Kennedy grew up in a home with political history, because his grandfather was Mayor of Boston. Kennedy's father made his wealth in the stock market and other big businesses. Working in the stock market enabled him to take his money out before the big crash of 1929, thus remaining a wealthy man and letting his nine children enjoy their life as a wealthy family. The Kennedy children grew up privileged, going to expensive private schools, going around the seas or lakes in their boats and visiting their summer homes where all the fun happened. After the kids had their fun, it was time to focus on a more serious matter, which was going to college.

Related essay excerpts:

O'Reilly and Dugard's book, Killing Kennedy, is about the events leading to President John F. Kennedy being shot, as well as what happened after the assassination. The book also describes the rise and fall of John F. Kennedy, and the authors write about the Cold War, Kennedy's dealings with communism, and threats of crime. In January of 1961 the Cold War was growing stronger, and Kennedy was struggling with communism; while all of this was happening, he was learning what it meant to be a president.

Ever since the assassination of John F. Kennedy in 1963, there has been controversy over whether the true gunman was held accountable. The United States government claimed that it was an easy, open-and-shut case: they found Lee Harvey Oswald, close to ground zero, with a freshly fired rifle, immediately after JFK was shot. Contrary to the government's report, skeptics argue a vast scope of conspiracies to shed light on what they believe happened that day, with ideas ranging from magic bullets to multiple...

Decades later, the Kennedy assassinations and surrounding mysteries continue to hold public interest. Although the Kennedys' notoriety as charismatic leaders is a significant contributor, other factors regarding societal psychology deserve consideration while exploring this phenomenon. With these events occurring during a time that allows living witnesses, accessible modern evidence, various media coverage, and visible modern impact, the mysterious Kennedy assassinations have the capacity to encourage...

Was John F. Kennedy's assassination the work of a single shooter, or was it a conspiracy? Since November 22, 1963, people around the world have wondered who it was that shot President Kennedy, and why. So many questions have formed around this event, not just about who the shooter was, but also questions like: what might the world be like today if the shooting hadn't happened? The Kennedy assassination has been a mystery for many years. A lot of people hear about the different...

"Our most basic common link is that we all inhabit this planet. We all breathe the same air. We all cherish our children's future. And we are all mortal," President Kennedy stated in his commencement speech at American University on June 10, 1963. John F. Kennedy was an American politician who served as the 35th President of the United States from January 1961 until his assassination in November 1963. There are numerous conspiracy theories involving Kennedy's assassination...

The John F. Kennedy assassination is believed to be one of the most controversial and debated topics in American history. JFK was one of the most beloved presidents of our time, and other presidential assassinations didn't attract as many conspiracy theories as the JFK assassination of November 22nd, 1963. Some of the theories involve a government cover-up, Mafia influence, and Cuban President Fidel Castro (Stern). The assassination of John F. Kennedy in Dallas, Texas, raised many questions that...

On November 22, 1963, three shots were fired at President John F. Kennedy's limousine in Dallas, Texas. The first shot went through the president's neck; the second was the fatal shot that would ultimately end Kennedy's life. There is a lot of speculation about what really took place in the assassination of John F. Kennedy. Many people believe that Lee Harvey Oswald worked alone, but there are many people across the nation who think differently. Many theories can both support and disprove that Lee...

The book I chose to read is The Assassination of John F. Kennedy by Lauren Spencer. It was published in 2002 by The Rosen Publishing Group, Inc., and contains 64 pages. This book not only provides information on the killing of President Kennedy, but also information on his life, the arrested murderer's life, and more interesting background information and details. The book's main objective is to go deeper into the case of John F. Kennedy's assassination, to discuss personal information about suspects...

...sixth floor of the Texas School Book Depository Building. However, did Lee Harvey Oswald, a crazed lunatic, act alone in the assassination of President Kennedy? Both first-hand knowledge and visual evidence allow people to re-examine the events of this day and argue that there were other gunmen involved in the shooting of our youngest elected president. John F. Kennedy was depicted as a nationwide hero to many Catholics living in the U.S. during the early 1960s. He was idolized by several...

John F. Kennedy, the 35th President of the United States, was assassinated on November 22, 1963 at 12:30 p.m. Central Standard Time in Dallas, Texas, while riding in a presidential motorcade through Dealey Plaza with his wife, Jacqueline, Texas Governor John Connally, and Connally's wife, Nellie. Kennedy was fatally shot by Lee Harvey Oswald. A ten-month investigation by the Warren Commission, from November 1963 to September 1964...
Saturday, December 21, 2019
Reconstruction in America
The period of Reconstruction in the South was a period of social reconstruction on a scale not previously seen in American history. The Reconstruction era followed the Civil War and lasted from 1864 to 1877. It brought an era of martial law, a change of social consciousness towards slavery and the rights of African Americans, and a New South with closer ties to the North. Emancipated slaves, Northerners, and white Southerners all had different opinions about the New South and the newfound freedom of the emancipated slaves, along with differing concepts of freedom. "We believe our present position is by no means so well understood among the loyal masses of the country, otherwise there would be..." [...] "Provided that the provisions of this section shall not be so construed as to allow any freedman, free Negro, or mulatto to rent or lease any lands or tenements, except in incorporated towns or cities, in which places the corporate authorities shall control the same." (The Civil Rights of Freedmen in Mississippi 414). Another example from the Black Codes supporting the claim that Blacks had enough freedom was: "Be it further enacted, That all freedmen, free Negroes, and mulattoes may intermarry with each other, in the same manner and under the same regulations that are provided by law for white persons: Provided, that the clerk of probate shall keep separate records of the same." (414).

Related essay excerpts:

...taxes for government aid programs. While examining South Africa's reconstruction, Tutu noted that "Harmony, friendliness, (and) community are the greater goods. Social harmony is for us the greatest good. Anything that subverts, that undermines this sought-after good, is to be avoided like the plague." He noticed that his nation received greater benefits when the people personally controlled amnesty instead of the courts. Likewise, America receives greater benefits when its citizens control charitable...

...what their next move towards reuniting the divided America was going to be. The period following the end of the Civil War would become known as the "Reconstruction Era," an era that raised just as many questions as it did answers, and a reconstruction of America that seems to carry on many decades later. Reconstruction would decide how the South would rejoin the Union, what was to become of the nearly 3 million freed black slaves, and how America was going to recover from such a devastating internal...

...is a lie. Racism is not dead; America has elected a president that ran a campaign off of it, and people of color are still vastly disadvantaged and underrepresented. We are not all created equal; white women make seventy-four cents to a dollar of a white man's, and women of color make even less. Over seventy percent of men in prison are men of color. The majority of this country's poor are immigrants and people of color. The fight for freedom for all is not over. America, we have problems, and the solution...

The Reconstruction era was a time for America to heal, a time to recuperate and move forward, but certain things take longer than others. One issue that took tremendous effort was the advancement of African-Americans. Freedmen were freed by law, but remained mentally, socioeconomically, and socially bound to oppression. Even after the Civil War ended, the fight wasn't over; there was a war within the government itself, and a greater fight for freedmen to achieve economic freedom without barriers. As...

As a country, America has gone through many political changes. Leaders have come and gone, all of them having different objectives and plans for the future. One period in which leaders sought change was 1865, the period known as Reconstruction. Reconstruction was a time of many different leaders, different goals and different accomplishments. Many debate whether Reconstruction was a success or a failure. Success is an event which accomplishes its intended purpose...

After the North won the Civil War, it was time to rebuild the nation. This period of reconstruction was supposed to bring a profound change to society. Unfortunately, this was not the case: Reconstruction did not fundamentally alter the nation. That is not to say that nothing happened, but nothing that really made a change or difference happened. First, control of the South was given right back to the planter elite. Also, even though slavery was abolished, blacks were not free. Finally...

America following Reconstruction was completely different from America during FDR's New Deal. In 1876, the government was based on the idea of laissez-faire, meaning that government stayed out of citizens' lives. Society in 1876 was dominated by white men who ran the country, while there were no rights for women, blacks, and immigrants. In 1876, Americans lived on farms in rural America. By the 1930s, America was a welfare state, with government just starting to control different aspects...

Lana Cox, History 121, Professor Adejumobi, November 7, 2008. Critical book review: They Say: Ida B. Wells and the Reconstruction of Race, by James West Davidson. Ida B. Wells, an African-American woman and feminist, shaped the image of empowerment and citizenship during post-Reconstruction times. The essays, books, and newspaper articles she wrote instigated a dialogue about race struggles between whites and blacks, while her personal narratives, including two diaries, a travel journal, and an...

...several different factors, each contributing in its own way. Four of the major pivot points were Jeffersonian democracy, Jacksonian democracy, the Civil War and Reconstruction, and the Revolution and Constitution. However, one of them happened to be the most impactful: the Civil War and Reconstruction. The American Civil War occurred from 1861 to 1865, lasting only four years. America's bloodiest clash resulted in the deaths of approximately 620,000 Americans and millions more...

Why did Reconstruction fail? Reconstruction in the United States is historically known as the time, shortly after the Civil War, in which the United States attempted to redress the inequities of slavery and its many other economic, social and political legacies, including the poor relationship between the North and the South. These problems were highly significant in America, and a variety of groups in government tried to resolve them, but this only led...
Friday, December 13, 2019
What is NATO for?
The North Atlantic Treaty Organization (NATO) is an alliance, formed in 1949, that now comprises 26 North American and European nations. Its objective is to protect the security and freedom of member states through military and political means. NATO is the principal security association within Europe. The alliance continues to help shield its members: the allies have modernized their shared strategic doctrine, upheld NATO's integrated military organization, and carried on conducting joint military planning, exercises and training. The allies have also created new forums and policies for boosting dialogue with the previously communist nations of eastern and central Europe. Most importantly, NATO has made a major contribution to the enforcement of UN Security Council resolutions in what was once Yugoslavia (Kaplan, 2004, 22). NATO has a significant function in controlling and containing militarized disputes within eastern and central Europe. It even strives to avert such conflicts by vigorously encouraging stability in the former Soviet sphere. NATO helped stabilize Western Europe, whose states were formerly often bitter enemies. By solving the security dilemma and offering an institutional framework for building shared security strategies, the alliance has contributed to rendering the use of force in the relationships among the nations of this region almost inconceivable (Duffeld, 1995). NATO continues to enhance member-country security against external hazards in a number of ways. Firstly, NATO upholds the strategic balance within Europe by counterbalancing the lingering danger emanating from Russian military strength. Secondly, it assists in tackling newly emerging dangers, including the intricate dangers that could result from disputes among and within the nations of eastern and central Europe. Thirdly, it works to keep such dangers from arising by nurturing stability in the former Soviet community (Churchill, 2006). Western European countries strive to uphold a counterweight to the former Soviet Union's residual military power, particularly the nuclear capability of Russia. Another post-Cold War function of NATO is shielding member states from an assortment of newly emerging dangers. More attention has been directed to potential perils emanating from the Middle East and North Africa, partly due to the proliferation of expertise for developing missiles and weapons of mass destruction in those areas. The most prominent of the new external dangers, however, are the territorial, ethnic and national disputes among and within the eastern and central European nations. These disputes can produce many refugees or spill over into neighboring nations' territories, NATO member states included. In the most extreme cases, outside nations could feel compelled to intervene, stoking a broadening of hostilities, as happened at the start of World War II. Although NATO has not been able to terminate such conflicts so far, the alliance helps tackle the problems emanating from these disputes in a number of ways. Firstly, NATO shields member nations from the probable spillover of armed hostilities.
Although no NATO member nation has ever been seriously threatened in this way, the alliance's extensive experience in arranging member-nation defenses ensures that NATO is adequately prepared to handle such emergencies (Sandler, Hartley, 1999, 16). NATO also helps other nations avoid being drawn into such conflicts. NATO's existence assures member nations located near such a zone that they will receive assistance in tackling nearby conflicts should those conflicts escalate and spill over, thus minimizing the motivation to intervene unilaterally. Instead, the presence of NATO helps ensure that any military participation by Western nations in these disputes, if it happens at all, is consensual and collective. The likelihood of a quick, coordinated response from NATO could deter other nations from interfering (http://www.nato.int/docu/speech/2003/s031103a.htm). In 1992 NATO reached a consensus to make alliance assets available in support of peacekeeping actions sanctioned by the United Nations (UN) or the Conference on Security and Cooperation in Europe (CSCE). At the beginning of 1994, NATO also endorsed the creation of a mechanism named Combined Joint Task Forces (CJTF), which would allow member coalitions ("coalitions of the willing") to utilize shared alliance assets for particular actions outside the treaty area. Most notably, NATO has acquired vital experience in what was once Yugoslavia. NATO personnel enforced the Adriatic maritime blockade as well as a no-fly zone over Bosnia. NATO also provided defensive air cover for United Nations ground forces, and used the threat of air strikes to secure exclusion zones for heavy weapons around the UN-designated Gorazde safe area and Sarajevo. Following the disintegration of socialism, numerous former Soviet-community nations embarked on ambitious economic and political reforms. Europe has substantial stakes in these efforts, because failure may result in mass migrations, domestic strife, armed disputes and direct dangers to surrounding NATO member states as well. NATO encourages stability within the former Soviet community in two ways. Firstly, the alliance directly nurtures the success of political restructuring within the area. Starting in 1990, the alliance has initiated a broad spectrum of institutions and programs for consultation on security concerns, most conspicuously the Partnership for Peace (PfP) and the North Atlantic Cooperation Council (NACC). NATO may utilize such initiatives to aid the young regimes in restructuring their security structures, planning procedures and policies (Greenwood, 1993). Such new arrangements may particularly strengthen democratic management of the military and respect for civilian authority by introducing eastern and central European leaders to Western models of civil-military relations. Secondly, the alliance boosts eastern and central European security by reassuring these nations that they will be assisted in the event of outside threats. This helps such states abandon potentially destabilizing activities and pursue their ambitious domestic restructuring agendas with more confidence. Since 1990, NATO's North Atlantic Council has constantly issued candid verbal statements of concern, as happened during the 1991 Soviet coup d'etat attempt.
The NACC permits states of the former Soviet Union to state their concerns and to discuss varied issues regularly, engaging their NATO counterparts as equal partners. The newly approved PfP provides every member with official dialogue with NATO in the event that such a member perceives a direct danger to its security, as well as solid military liaison with NATO member states through contribution to several military operations and activities (http://www.nato.int/docu/speech/2003/s031103a.htm). Since its formative years, NATO has worked significantly towards normalizing relationships among member states. Extremely important among NATO's intra-alliance roles is reassurance. NATO's existence assures member states that they need not fear each other. The alliance minimizes the likelihood of disputes among western European member states in three ways: increasing stability; tying the US to Europe so as to guarantee the upholding of the balance of power within the area; and inhibiting the re-nationalization of these nations' security strategies. A significant likely cause of conflict between nations is misunderstanding and misperception. Without reliable and detailed data, policy makers could overstate the offensive military capacities of other nations or misconstrue foreign objectives, usually regarding them as more antagonistic than they actually are. They are likewise inclined to overlook the security concerns their own activities could arouse abroad (Kaplan, 2004, 41). Therefore, international relationships are usually characterized by mistrust and suspicion. NATO helps avert the emergence of such damaging dynamics; it instead encourages mutual confidence by facilitating a high level of intra-alliance openness. Participation in NATO's force-planning process requires member states to share detailed data regarding their armed forces, defense budgets and future plans. Owing to this institutionalized transparency, member states hide few secrets from their counterparts, and they have little motivation to do so. NATO also nurtures reassurance among member states by integrating members' security strategies. To different but normally significant extents, nations formulate and implement their defense strategies jointly as members of NATO, as opposed to on an exclusively national basis. Such denationalization of security strategy neutralizes the usual competition and rivalry for military supremacy that could otherwise arise among the key European powers; it also helps prevent any use of military posturing to gain political clout in Europe (Churchill, 2006). If re-nationalization were to happen, it could give rise to concerns about internal imbalances within Western Europe and arouse fresh competition, conflict and mistrust. NATO encourages the denationalization of security strategy in a number of ways. NATO's consultative arms, force-planning procedures and integrated military systems help develop a shared identity among member states. Frequent and comprehensive dialogue results in a high level of common understanding. Cooperative force planning helps reshape member states' force postures to reflect NATO-wide, as opposed to national, concerns. Also, assignments to NATO's military commands and civilian bureaucracies socialize military personnel and state officials into a shared NATO culture.
Additionally, participation in NATO's combined military system fosters reduced military independence among member states, particularly within central Europe, because it permits members to relinquish, or at minimum de-emphasize, several components vital for an autonomous military capacity. Numerous European nations, for instance, rely heavily upon the alliance's multinational airborne early-warning force as well as its integrated air defense structures. Small and large nations alike have given up their capability to undertake particular missions, such as minesweeping and air surveillance, with the intention of husbanding security resources, knowing that allies can undertake such missions (Duffeld, 1995). International integration develops a measure of shared control by increasing the extent of joint contribution to operational and organizational planning. Therefore, the persistent existence of the multinational military system imposes restraints upon the ability of many member states to use their armed personnel for purely national objectives, at any rate in the short to medium term, and assures members of the shared purpose of their armed might. Without NATO, the likelihood of one nation's forces raising alarm in another nation would be greater (http://www.direct.gov.uk/en/Governmentcitizensandrights/UKgovernment/TheUKandtheworld/DG_073420). NATO member states regard maintenance of the alliance as mutually advantageous, since it carries on the performance of a number of essential security roles, both internal and external, including incorporating Canada and the United States into European defense matters. NATO has also adapted impressively to the dynamic European defense environment, a positive example being the experience in Bosnia. Whereas the joint defense of NATO territory is the core function of the alliance, the new NATO, by widening its key role to incorporate peacekeeping and crisis handling as well as encouraging cooperation and partnership, including a strategic association with Moscow, has emerged as the backbone of a European joint defense regime (Sandler, Hartley, 1999, 67).
Thursday, December 5, 2019
Hiroshima
War is an ever-changing, advancing type of combat. From swords to guns, the weapons used are always developing and becoming much more powerful. Nuclear bombs are one of the most forceful weapons that exist today. On August 6, 1945, during World War II, the United States dropped an atomic bomb on Hiroshima, a Japanese city and military center. About 130,000 people were reported dead, injured, or missing, and another 177,000 were left homeless. It was the first atomic bomb ever used against an enemy. The effects of this explosion were so devastating and long-lasting that they are still felt today. Was the United States justified in dropping the atomic bomb?

On December 7, 1941, Pearl Harbor was deliberately attacked by the Japanese. Reports show that 2,400 people were killed and 1,300 were wounded. The reason Japan bombed Pearl Harbor was that it was where the U.S. Navy's ships were kept. They were hoping to take out the Navy, and they were almost successful; they expected the aircraft carriers to be in the harbor, but luckily they were not. Although the attack may have been a success for the Japanese, it became a huge mistake in the end. One reason it was a mistake was that it caused the U.S. to enter the war, and the United States was the ultimate cause of Japan losing the war. Secondly, it made Americans angry and determined to defeat the Japanese; recruiting offices were flooded with young patriots who wanted to help their country. This attack was just an example of what could have happened if the war had continued: another attack on U.S. soil could have taken place, turning thousands of dead Americans into thousands more. That is one of the main reasons the war needed to be stopped immediately.

The United States made the idea of the atomic bomb, and the building of it, possible. The power behind such a weapon was just what the United States needed. Many scientists designed and constructed the atomic bomb, including Enrico Fermi, J. Robert Oppenheimer, and Harold Urey. The group was headed by a United States Army engineer, Major General Leslie Groves.

The United States came up with a list of cities that could be possible targets for the bomb. The list included Hiroshima, Kokura, Niigata, and Nagasaki. It was later decided that Hiroshima would be the first target. Then, in the early hours of August 6, 1945, the B-29 bomber Enola Gay, along with three other B-29s, headed out from Tinian airbase to Hiroshima. The Enola Gay was equipped with the A-bomb, a single 4-ton nuclear device carrying roughly 140 pounds of uranium. At 8:15 a.m. (Japanese standard time) the Enola Gay let the atomic bomb fall. The bomb exploded around 2,000 feet above the ground. The explosion caused all wooden buildings within a radius of 1.2 miles to collapse, and the blast itself demolished three-fifths of the city within seconds. United States scientists had estimated that only 20,000 Japanese would die; instead, 75,000 people perished instantly.

Three days after the bombing of Hiroshima, it was decided that another Japanese city must be hit with an A-bomb. Three targets remained, and the city of Kokura was chosen. Because visibility there was so poor, due to smoke and pollution, the target was changed to the city of Nagasaki. The smoke and pollution were just as bad over Nagasaki, but through a gap in the smog the bombardier spotted the target. The crew then released the 4.5-ton bomb at 11:02 a.m., killing 30,000 people instantly.
A day after the Nagasaki bombing, the Japanese government offered to surrender. This ended the first ever nuclear war.

Yet, while the first atomic bomb was a success, it raised many ethical and controversial issues. Most people in the United States supported the use of the atomic bomb; even President Truman commented on what a great invention it was. Many people, including the scientists who developed the bomb, opposed the bombings and felt that killing that many innocent people just to gain an advantage in the war was immoral. One famous figure, Albert Einstein, was quoted saying, "I made one great mistake in my life, when I signed the letter to President Roosevelt recommending that the atomic bombs be made."

The atomic bomb was considered a quick and even economical way to win the war; however, it was a cruel and unusual form of punishment for the Japanese citizens. The weapon that we refer to as quick was just the opposite. On one hand, it meant a quick end to the war for the United States; on the other hand, a slow and painful death for many innocent Japanese.
The effects of radiation poisoning are horrific, ranging from purple spots on the skin, hair loss, nausea, vomiting, bleeding from the mouth, gums, and throat, and weakened immune systems to massive internal hemorrhaging, not to mention disfiguring radiation burns. The effects of the radiation poisoning continued to show up until about a month after the bombing. In fact, the bomb also killed or permanently damaged fetuses in the womb. Death and destruction come hand in hand with war; however, a quick death is always more humane.
Thursday, November 28, 2019
Albinism
Albinism is a term used to describe people and animals that have little or no pigment in their eyes, skin, or hair. People with this condition have inherited genes that do not produce normal amounts of a pigment called melanin. It is equally common in all races and consists of two major classes. The first, oculocutaneous albinism, involves the eyes, skin, and hair; the second, ocular albinism, involves mainly the eye. The oculocutaneous variety can be divided into ten different types, the most common being "ty-negative" and "ty-positive." Ty-negative albinism leaves the person with no melanin pigmentation, hampers vision to a much more severe degree than ty-positive, and is caused by a genetic defect in an enzyme called tyrosinase. People with ty-positive albinism have very slight pigmentation and fewer vision problems. Ocular albinism may give the bearer slightly lighter hair and skin color, compared with the rest of their family, as well as the more obvious effects on the eye. The pigment loss may allow for involuntary back-and-forth movement of the eyes, crossed eyes, and sensitivity to bright light. Nerves going from the brain to the eye are not routed properly and have more nerve fibers crossing to the opposite side of the brain than normal.

Both types of albinism are passed from parent to child and almost always require that both parents carry an albinism gene. This is referred to as autosomal recessive inheritance, and the parents may have normal pigmentation yet carry the gene and have a baby with albinism. A new test can now identify carriers of the gene for ty-negative albinism and any other types in which the tyrosinase enzyme doesn't function; a blood sample is used to determine whether the gene is present by reading the DNA. X-linked inheritance differs from autosomal recessive inheritance in that only the mother carries the gene. The albinism gene is passed on the X chromosome from the mother, almost always to her son. It can be recognized by an ophthalmologist because of subtle eye changes.

Albinism is unselective with regard to race: Caucasians and non-Caucasians share this gene defect equally. One in 17,000 people has some type of albinism. In autosomal recessive inheritance, if both parents carry the gene yet neither has albinism, there is a one-in-four chance with each pregnancy that the baby will be born with albinism; a small worked example of this probability appears at the end of this essay.

Treatment of albinism consists primarily of "visual rehabilitation." Surgery can be used to correct crossed eyes, but it does not correct problems with the routing of nerves, so it does not give binocular vision. Sensitivity to bright light can be combated with tints or sunglasses. Specific optical aids, such as bifocals and magnifiers, are also very helpful for this condition. The effects of the condition are not reversible, because it is part of the person's genetic makeup, and can only be helped with aids of this kind.

Albinism is a very misunderstood condition, and because of this, children with it can have a tough childhood. They are prone to isolation due to these misunderstandings. People question their parentage, perhaps assuming a mixed marriage, and ostracize them. They may face criticism and ridicule in the classroom; other students will not understand why they appear this way and deal with it the best way they know how: laughing, smirking, giggling, and so on. Children with albinism may need special emotional support from both their parents and teachers. They should be included in all group activities as well, so they don't stand out.
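To make the one-in-four figure above concrete, here is a minimal worked sketch (not part of the original essay; the allele labels and the code itself are illustrative assumptions) that enumerates the four equally likely allele combinations a child can inherit from two carrier parents:

```python
from itertools import product

# Each carrier parent has one pigment-producing allele "A" and one
# albinism allele "a"; a child receives one allele from each parent.
parent1 = ["A", "a"]
parent2 = ["A", "a"]

# The four equally likely combinations (a textual Punnett square).
children = [tuple(sorted(pair)) for pair in product(parent1, parent2)]

affected = children.count(("a", "a"))      # two recessive alleles
carriers = children.count(("A", "a"))      # unaffected carriers
unaffected = children.count(("A", "A"))    # unaffected non-carriers

print(f"P(albinism)    = {affected}/{len(children)}")    # 1/4
print(f"P(carrier)     = {carriers}/{len(children)}")    # 2/4
print(f"P(non-carrier) = {unaffected}/{len(children)}")  # 1/4
```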
Sunday, November 24, 2019
The 19th Century Bone Wars
The 19th Century Bone Wars When most people think of the Wild West, they picture Buffalo Bill, Jesse James, and caravans of settlers in covered wagons. But for paleontologists, the American west in the late 19th century conjures up one image above all: the enduring rivalry between two of this countrys greatest fossil hunters, Othniel C. Marsh and Edward Drinker Cope. The Bone Wars, as their feud became known, stretched from the 1870s well into the 1890s, and resulted in hundreds of new dinosaur findsnot to mention reams of bribery, trickery, and outright theft, as well get to later. (Knowing a good subject when it sees one, HBO recently announced plans for a movie version of the Bone Wars starring James Gandolfini and Steve Carell; sadly, Gandolfinis sudden death has put the project in limbo.) In the beginning, Marsh and Cope were cordial, if somewhat wary, colleagues, having met in Germany in 1864 (at the time, western Europe, not the United States, was at the forefront of paleontology research). Part of the trouble stemmed from their different backgrounds: Cope was born into a wealthy Quaker family in Pennsylvania, while Marshs family in upstate New York was comparatively poor (albeit with a very rich uncle, who enters the story later). Its probable that, even then, Marsh considered Cope a bit of a dilettante, not really serious about paleontology, while Cope saw Marsh as too rough and uncouth to be a true scientist. The Fateful Elasmosaurus Most historians trace the start of the Bone Wars to 1868, when Cope reconstructed a strange fossil sent to him from Kansas by a military doctor. Naming the specimen Elasmosaurus, he placed its skull on the end of its short tail, rather than its long neck (to be fair to Cope, to that date no had ever seen an aquatic reptile with such out-of-whack proportions). When he discovered this error, Marsh (as the legend goes) humiliated Cope by pointing it out in public, at which point Cope tried to buy (and destroy) every copy of the scientific journal in which he had published his incorrect reconstruction. This makes for a good storyand the fracas over Elasmosaurus certainly contributed to the enmity between the two menbut the Bone Wars likely started on a more serious note. Cope had discovered the fossil site in New Jersey that yielded the fossil of Hadrosaurus, named by the two mens mentor, the famous paleontologist Joseph Leidy. When he saw how many bones had yet to be recovered from the site, Marsh paid the excavators to send any interesting finds to him, rather than to Cope. Cope soon found out about this gross violation of scientific decorum, and the Bone Wars began in earnest. Into the West What kicked the Bone Wars into high gear was the discovery, in the 1870s, of numerous dinosaur fossils in the American west (some of these finds were made accidentally, during excavation work for the Transcontinental Railroad). In 1877, Marsh received a letter from Colorado schoolteacher Arthur Lakes, describing the saurian bones he had found during a hiking expedition; Lakes sent sample fossils to both Marsh and (because he didnââ¬â¢t know if Marsh was interested) Cope. Characteristically, Marsh paid Lakes $100 to keep his discovery a secretand when he discovered that Cope had been notified, dispatched an agent west to secure his claim. Around the same time, Cope was tipped off to another fossil site in Colorado, which Marsh tried (unsuccessfully) to horn in on. 
By this time, it was common knowledge that Marsh and Cope were competing for the best dinosaur fossils, which explains the subsequent intrigues centered on Como Bluff, Wyoming. Using pseudonyms, two workers for the Union Pacific Railroad alerted Marsh to their fossil finds, hinting (but not stating explicitly) that they might strike a deal with Cope if Marsh didn't offer generous terms. True to form, Marsh dispatched another agent, who made the necessary financial arrangements, and soon the Yale-based paleontologist was receiving boxcars of fossils, including the first specimens of Diplodocus, Allosaurus and Stegosaurus.

Word about this exclusive arrangement soon spread, not least because the Union Pacific employees leaked the scoop to a local newspaper, exaggerating the prices Marsh had paid for the fossils in order to bait the trap for the wealthier Cope. Soon, Cope sent his own agent westward, and when those negotiations proved unsuccessful (possibly because he wasn't willing to pony up enough money), he instructed his prospector to engage in a bit of fossil-rustling and steal bones from the Como Bluff site, right under Marsh's nose. Soon afterward, fed up with Marsh's erratic payments, one of the railroad men began working for Cope instead, turning Como Bluff into the epicenter of the Bone Wars.

By this time, both Marsh and Cope had relocated westward, and over the next few years they engaged in such hijinks as deliberately destroying uncollected fossils and fossil sites (so as to keep them out of each other's hands), spying on each other's excavations, bribing employees, and even stealing bones outright. According to one account, workers on the rival digs once took time out from their labors to pelt each other with stones!

Cope and Marsh, Bitter Enemies to the Last

By the 1880s, it was clear that Othniel C. Marsh was winning the Bone Wars. Thanks to the support of his wealthy uncle, George Peabody (who lent his name to the Yale Peabody Museum of Natural History), Marsh could hire more employees and open more dig sites, while Edward Drinker Cope slowly but surely fell behind. It didn't help matters that other parties, including a team from Harvard University, now joined the dinosaur gold rush. Cope continued to publish numerous papers, but, like a political candidate taking the low road, Marsh made hay out of every tiny mistake he could find.

Cope soon had his opportunity for revenge. In 1884, Congress began an investigation into the U.S. Geological Survey, which Marsh had been appointed the head of a few years before. Cope recruited a number of Marsh's employees to testify against their boss (who wasn't the easiest person in the world to work for), but Marsh connived to keep their grievances out of the newspapers. Cope then upped the ante: drawing on a journal he had kept for two decades, in which he meticulously listed Marsh's numerous felonies, misdemeanors and scientific errors, he supplied the information to a journalist for the New York Herald, which ran a sensational series about the Bone Wars. Marsh issued a rebuttal in the same newspaper, hurling similar accusations against Cope. In the end, this public airing of dirty laundry (and dirty fossils) didn't benefit either party.
Marsh was asked to resign his lucrative position at the Geological Survey, and Cope, after a brief interval of success (he was appointed head of the American Association for the Advancement of Science), was beset by poor health and had to sell off portions of his hard-won fossil collection. By the time Cope died in 1897, both men had squandered their considerable fortunes. Characteristically, though, Cope prolonged the Bone Wars even from his grave. One of his last requests was that scientists dissect his head after his death to determine the size of his brain, which he was certain would be bigger than Marsh's. Wisely, perhaps, Marsh declined the challenge, and to this day Cope's unexamined head rests in storage at the University of Pennsylvania.

The Bone Wars: Let History Judge

As tawdry, undignified, and out-and-out ridiculous as the Bone Wars occasionally were, they had a profound effect on American paleontology. In the same way competition is good for commerce, it can also be good for science: so eager were Othniel C. Marsh and Edward Drinker Cope to one-up each other that they discovered many more dinosaurs than if they'd merely engaged in a friendly rivalry. The final tally was truly impressive: Marsh discovered 80 new dinosaur genera and species, while Cope named a more-than-respectable 56.

The fossils discovered by Marsh and Cope also helped to feed the American public's increasing hunger for new dinosaurs. Each major discovery was accompanied by a wave of publicity, as magazines and newspapers illustrated the latest amazing finds, and the reconstructed skeletons slowly but surely made their way to major museums, where they still reside to the present day. You might say that popular interest in dinosaurs really began with the Bone Wars, though it's arguable that it would have come about naturally, without all the bad feelings!

The Bone Wars had a couple of negative consequences as well. First, paleontologists in Europe were horrified by the crude behavior of their American counterparts, which left a lingering, bitter distrust that took decades to dissipate. And second, Cope and Marsh described and reassembled their dinosaur finds so quickly that they were occasionally careless. For example, a hundred years of confusion about Apatosaurus and Brontosaurus can be traced directly back to Marsh, who put a skull on the wrong body, the same way Cope did with Elasmosaurus, the incident that started the Bone Wars in the first place!
Thursday, November 21, 2019
Fluoridation and Toxicity Issues
Fluoridation might actually result in the darkening of the teeth, or dental fluorosis, and may even affect the gums (The Debate over Adding Fluoride in Our Water, 2013). This can produce something like what American researchers called the Colorado Brown Stain, which was a result of excessive exposure to fluoride and affected children from 1909 to 1915. Notably, the darkening of the teeth was not related to tooth decay (The Story of Fluoridation, 2011). In a review by Parnell et al. (2009), 88 studies indicated that fluorosis may be derived from drinking water treated with fluoride. Fluoride consumption in drinking water may also be associated with problems concerning the health of the skeletal system, the most common being bone fracture (Limeback, 2000), and the most common fracture type being hip fracture (Diesendorf et al., 1997). Moreover, data from 29 studies indicate that long-term consumption of fluoridated drinking water can result in bone fracture (Parnell et al., 2009). Even though these studies are mostly from the United States, that does not change the fact that the potential harmful effects of fluoride can affect any group of people in the world, as long as they are exposed to relatively large amounts of the chemical in water. The third, and perhaps most difficult, concern, which I hope Dr. Nokes will bring up and clarify, is that an excess of fluoride in the human body is simply "detrimental to long-term dental and overall health" (The Debate over Adding Fluoride in Our Water, 2013). This is alarming because people are generally not familiar with the standard amount of fluoride that a human body should take in, or with the maximum levels of the chemical that the body can handle. Although the Environmental Protection Agency of the United States specifies 4 mg/L as the maximum tolerable level of fluoride in drinking water, the data may be...
Wednesday, November 20, 2019
What Factors did Account for South Africa's 1994 Transition to Democracy?
This period was associated with racial, social, political and economic segregation, which led to apartheid. On February 2nd, 1990, President F.W. de Klerk delivered a speech that hinted at a decisive moment in South Africa's struggle for democracy (Decalo 7-35). The day is highly regarded by many South Africans, as it marked the announcement of the release of Nelson Mandela (freed on the 11th of February) and of other detainees who had been arrested in the course of the struggle. This paved the way for open negotiations. South Africa had been going through a long struggle for democracy in a society that chiefly consisted of whites at the helm of leadership and power, and a non-white sub-society with little or no influence in governance matters. The factors that led to the transition in South Africa can be classified as both internal and external. In his book Coups and Army Rule in Africa: Motivations and Constraints, Samuel Decalo argues that the transitions that led to democratization in South Africa were majorly internal. The limited freedom of expression saw most opposition parties denied access to the media when conducting their political functions; media content was normally dominated by news about the authoritarian government. This had to be curbed, with revolution seen as the only effective tool (Decalo 20). Another factor suggested by Decalo is the institutional factor (25-35). Most of the dynamics that characterized the negotiations were institutionalized in the post-apartheid period, which led to significant stability and the consolidation of democracy. The rules, norms, and formal and informal principles were widely accepted by the majority, making the transition process possible. According to Decalo, the most crucial dynamic that underwent institutionalization is constitutionalism, whereby all political groupings and civil organizations accepted the rule of law. The democratic changes that occurred in SA are also linked to international factors. According to Sola Akinrinade and Amadu Sesay in their edited book Africa in the Post-Cold War International System, the external factors that influenced the transition in South Africa include democratization in Eastern Europe and the end of the Cold War. The end of World War II saw a rise in the global political struggle for power between the United States and its associates in the West, and the Soviet Union and its Warsaw Pact allies in Eastern Europe (Akinrinade & Sesay, 92-128). According to Akinrinade and Sesay (1998), the Eastern European group had less democratically developed governments, and in the 1980s the Soviet Union and its Eastern European allies went through vigorous democratic transitions, a period that also saw East and South East Asian countries leave...
Monday, November 18, 2019
WORLDCOM
... (1995), and MFS Communications Company (1996). WorldCom also acquired Digex's parent company, Intermedia Communications, and sold all of Intermedia's non-Digex assets to Allegiance Telecom. Until 2003, WorldCom was considered a telecom giant and the second-largest long-distance provider in the U.S.

A one-time high flyer like WorldCom should be held legally, ethically and socially responsible, having entered into a great number of business contracts with its customers and suppliers. WorldCom should be held responsible for its intensive mergers and its failure to use public funds (coming from public shareholders) carefully. It remains questionable how a huge company that had been consistently greedy in entering into mergers with similar companies could suddenly file for bankruptcy in July 2002. As of 2005, WorldCom was still facing court trials regarding this matter. It is the legal responsibility of WorldCom's employers to ensure that the company's directors operate within society's accepted laws and regulations. It is their legal responsibility to register and communicate with the shareholders and to assure them that dividends are paid on time. Top management should also monitor the company's financial statements. WorldCom is facing huge financial and legal problems. The company is considered to have defrauded its investors by overstating its earnings by nearly $10 billion, with WorldCom's top management also profiting from their own criminal acts. WorldCom has to be held responsible for taking investors' money in excess of $176 billion and causing losses to WorldCom's employees, state pension funds and shareholders through lost jobs, worthless stock, and lost 401(k) savings. The act of overstating a company's earnings is clearly a criminal act, and it is punishable by law. One way or another, someone has to be held responsible for such unprofessional...
Friday, November 15, 2019
Credit Risk Dissertation
EXECUTIVE SUMMARY

The future of banking will undoubtedly rest on risk management dynamics. Only those banks that have efficient risk management systems will survive in the market in the long run. The major cause of serious banking problems over the years continues to be directly related to lax credit standards for borrowers and counterparties, poor portfolio risk management, or a lack of attention to deterioration in the credit standing of a bank's counterparties. Credit risk is the oldest and biggest risk that a bank, by virtue of the very nature of its business, inherits. It has, however, acquired greater significance in the recent past for various reasons. There have been many traditional approaches to measuring credit risk, such as the logit and linear probability models, but with the passage of time new approaches have been developed, such as CreditRisk+ and the KMV model.

The Basel I Accord was introduced in 1988 to provide a framework for regulatory capital for banks, but its one-size-fits-all approach led to a shift to a new and comprehensive approach, Basel II, which adopts a three-pillar approach to risk management. Banks use a number of techniques to mitigate the credit risks to which they are exposed. The RBI has prescribed adoption of the comprehensive approach for the purpose of CRM, which allows fuller offset of collateral against exposures by effectively reducing the exposure amount by the value ascribed to the collateral, as illustrated in the sketch below.
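As a rough illustration of the comprehensive approach mentioned above, the sketch below applies the standard Basel II haircut formula for collateralised exposures. This is a minimal sketch only; the haircut values and loan figures are invented for illustration and are not taken from the bank under study.

```python
def adjusted_exposure(exposure: float, collateral: float,
                      h_e: float, h_c: float, h_fx: float = 0.0) -> float:
    """Basel II comprehensive approach:
    E* = max(0, E*(1 + He) - C*(1 - Hc - Hfx)),
    where He is the haircut for exposure volatility, Hc the haircut for
    collateral volatility, and Hfx the haircut for any currency mismatch."""
    return max(0.0, exposure * (1.0 + h_e) - collateral * (1.0 - h_c - h_fx))

# Illustrative numbers only: a loan of 100 secured by collateral worth 80
# that attracts a 15% haircut leaves a net exposure of 32 for capital purposes.
print(adjusted_exposure(100.0, 80.0, h_e=0.0, h_c=0.15))  # -> 32.0
```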
Hence, the credit route is now more open to lesser mortals (Tier-II borrowers). With margin levels going down, banks are unable to absorb the level of loan losses. Even in banks which regularly fine-tune credit policies and streamline credit processes, it is a real challenge for credit risk managers to correctly identify pockets of risk concentration, quantify the extent of risk carried, identify opportunities for diversification and balance the risk-return trade-off in their credit portfolio. The management of banks should strive to embrace the notion of 'uncertainty and risk' in their balance sheet and instill the need for approaching credit administration from a 'risk perspective' across the system by placing well-drafted strategies in the hands of the operating staff, with due material support for their successful implementation. There is a need for a strategic approach to credit risk management (CRM) in Indian commercial banks, particularly in view of:

(1) the higher NPA level in comparison with the global benchmark
(2) the RBI's stipulation about dividend distribution by banks
(3) the revised NPA level and CAR norms
(4) the New Basel Capital Accord (Basel II) revolution

1.2 OBJECTIVES

To understand the conceptual framework for credit risk.
To understand credit risk under the Basel II Accord.
To analyze the credit risk management practices in a leading nationalised bank.

1.3 RESEARCH METHODOLOGY

Research design: In order to arrive at a more comprehensive definition of the problem and to become familiar with the problems, an extensive literature survey was done to collect secondary data on the various variables, probable contemporary issues and the clarity of concepts.

Data collection techniques: The data collection technique used is interviewing. Data has been collected from both primary and secondary sources.

Primary data: collected by making personal visits to the bank.

Secondary data: details have been collected from research papers, working papers and white papers published by various agencies like ICRA, FICCI and IBA, as well as articles from the internet and various journals.

1.4 LITERATURE REVIEW

* Merton (1974) applied the option pricing model to evaluate the credit risk of an enterprise, and the approach has drawn a lot of attention from Western academic and business circles. Merton's model is the theoretical foundation of structural models. It is not only based on a strict and comprehensive theory but also uses market information, the stock price, as an important variable to evaluate credit risk. This allows credit risk to be monitored in close to real time at a much higher frequency. This advantage has made the model widely applied in academic and business circles for a long time. Other structural models try to refine the original Merton framework by removing one or more of its unrealistic assumptions.

* Black and Cox (1976) postulate that default occurs as soon as the firm's asset value falls below a certain threshold. In contrast to the Merton approach, default can occur at any time. The paper by Black and Cox (1976) is the first of the so-called first passage models (FPM). First passage models specify default as the first time the firm's asset value hits a lower barrier, allowing default to take place at any time. When the default barrier is exogenously fixed, as in Black and Cox (1976) and Longstaff and Schwartz (1995), it acts as a safety covenant to protect bondholders. Black and Cox also introduce the possibility of more complex capital structures, with subordinated debt.
* Geske (1977) introduces interest-paying debt to the Merton model.

* Vasicek (1984) introduces the distinction between short-term and long-term liabilities, which now represents a distinctive feature of the KMV model. Under these models, all the relevant credit risk elements, including default and recovery at default, are a function of the structural characteristics of the firm: asset levels, asset volatility (business risk) and leverage (financial risk).

* Kim, Ramaswamy and Sundaresan (1993) suggest an alternative approach which still adopts the original Merton framework as far as the default process is concerned but, at the same time, removes one of the unrealistic assumptions of the Merton model, namely that default can occur only at maturity of the debt, when the firm's assets are no longer sufficient to cover its debt obligations. Instead, it is assumed that default may occur at any time between the issuance and maturity of the debt and that default is triggered when the value of the firm's assets reaches a lower threshold level. In this model, the recovery rate (RR) in the event of default is exogenous and independent of the firm's asset value. It is generally defined as a fixed ratio of the outstanding debt value and is therefore independent of the PD. The attempt to overcome the shortcomings of structural-form models gave rise to reduced-form models. Unlike structural-form models, reduced-form models do not condition default on the value of the firm, and parameters related to the firm's value need not be estimated to implement them.

* Jarrow and Turnbull (1995) assume that, at default, a bond has a market value equal to an exogenously specified fraction of an otherwise equivalent default-free bond.

* Duffie and Singleton (1999) followed with a model that, when the market value at default (i.e. the RR) is exogenously specified, allows for closed-form solutions for the term structure of credit spreads.

* Zhou (2001) attempts to combine the advantages of structural-form models (a clear economic mechanism behind the default process) with those of reduced-form models (unpredictability of default). This model links RRs to the firm value at default, so that the variation in RRs is endogenously generated and the correlation between RRs and credit ratings, reported first in Altman (1989) and in Gupton, Gates and Carty (2000), is justified.

Lately a portfolio view on credit losses has emerged, recognising that changes in credit quality tend to co-move over the business cycle and that one can diversify part of the credit risk through a clever composition of the loan portfolio across regions, industries and countries. Thus, in order to assess the credit risk of a loan portfolio, a bank must not only investigate the creditworthiness of its customers, but also identify the concentration risks and possible co-movements of risk factors in the portfolio.

* CreditMetrics, publicized in 1997 by JP Morgan (Gupton et al., 1997), bases its methodology on the probability of moving from one credit quality to another within a given time horizon (credit migration analysis). A rating system with probabilities of migrating from one credit quality to another over a given time horizon (the transition matrix) is the key component of the credit-VaR proposed by JP Morgan for estimating the portfolio value-at-risk due to credit (credit-VaR). The specified credit risk horizon is usually one year.
* Sy (2007) states that the primary cause of credit default is loan delinquency due to insufficient liquidity or cash flow to service debt obligations. In the case of unsecured loans, delinquency is assumed to be a necessary and sufficient condition. In the case of collateralized loans, delinquency is a necessary but not a sufficient condition, because the borrower may be able to refinance the loan from positive equity or net assets to prevent default. In general, for secured loans, both delinquency and insolvency are assumed necessary and sufficient for credit default.

CHAPTER 2 THEORETICAL FRAMEWORK

2.1 CREDIT RISK

Credit risk is risk due to uncertainty in a counterparty's (also called an obligor's or credit's) ability to meet its obligations. Because there are many types of counterparties, from individuals to sovereign governments, and many different types of obligations, from auto loans to derivatives transactions, credit risk takes many forms, and institutions manage it in different ways. Although credit losses naturally fluctuate over time and with economic conditions, there is (ceteris paribus) a statistically measurable long-run average loss level. Losses can be divided into two categories: expected losses (EL) and unexpected losses (UL). EL is based on three parameters:

* The likelihood that default will take place over a specified time horizon (probability of default, or PD)
* The amount owed by the counterparty at the moment of default (exposure at default, or EAD)
* The fraction of the exposure, net of any recoveries, that will be lost following a default event (loss given default, or LGD)

EL = PD x EAD x LGD

EL can be aggregated at various levels (e.g. individual loan or entire credit portfolio), although it is typically calculated at the transaction level; it is normally expressed either as an absolute amount or as a percentage of transaction size. It is also both customer- and facility-specific, since two different loans to the same customer can have very different EL due to differences in EAD and/or LGD. It is important to note that EL (or, for that matter, credit quality) does not by itself constitute risk; if losses always equaled their expected levels, there would be no uncertainty. Instead, EL should be viewed as an anticipated "cost of doing business" and should therefore be incorporated in loan pricing and ex ante provisioning. Credit risk, in fact, arises from variations in the actual loss levels, which give rise to the so-called unexpected loss (UL). Statistically speaking, UL is simply the standard deviation of losses around EL:

UL = σ(EL) = σ(PD x EAD x LGD)
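To make the EL and UL arithmetic concrete, here is a minimal Python sketch over a toy three-loan portfolio. The PD, EAD and LGD figures are invented for illustration, and the stand-alone UL line assumes a Bernoulli default event with fixed EAD and LGD, a common textbook simplification rather than anything taken from the bank under study.

import math

# Toy portfolio: (probability of default, exposure at default, loss given default)
loans = [
    (0.02, 1_000_000, 0.45),
    (0.05,   250_000, 0.60),
    (0.01, 2_000_000, 0.35),
]

portfolio_el = 0.0
for pd_, ead, lgd in loans:
    el = pd_ * ead * lgd                      # EL = PD x EAD x LGD
    # Stand-alone UL assuming a Bernoulli default with fixed EAD and LGD
    # (an assumption of this sketch; the text defines UL more generally
    # as the standard deviation of losses):
    ul = ead * lgd * math.sqrt(pd_ * (1.0 - pd_))
    portfolio_el += el
    print(f"EAD={ead:>9,} PD={pd_:.2%} LGD={lgd:.0%} EL={el:>9,.0f} UL={ul:>9,.0f}")

print(f"Portfolio EL = {portfolio_el:,.0f}")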
Once the bank-level credit loss distribution is constructed, credit economic capital is simply determined by the bank's tolerance for credit risk, i.e. the bank needs to decide how much capital it wants to hold in order to avoid insolvency due to unexpected credit losses over the next year. A safer bank must have sufficient capital to withstand losses that are larger and rarer, i.e. losses that extend further out in the loss distribution tail. In practice, therefore, the choice of confidence interval in the loss distribution corresponds to the bank's target credit rating (and related default probability) for its own debt. Economic capital is the difference between EL and the selected confidence level at the tail of the loss distribution; it is equal to a multiple K (often referred to as the capital multiplier) of the standard deviation of EL (i.e. UL). The shape of the loss distribution can vary considerably depending on product type and borrower credit quality. For example, high-quality (low-PD) borrowers tend to have proportionally less EL per unit of capital charged, meaning that K is higher and the shape of their loss distribution is more skewed (and vice versa).
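The tail logic just described can be sketched with a crude Monte Carlo simulation in Python: economic capital is read off as the gap between a high quantile of simulated losses and EL, and the implied capital multiplier K is that gap divided by UL. The toy portfolio, the 99.9% confidence level and the assumption of independent defaults are all inventions of the example, not features of any particular bank's model.

import random
import statistics

random.seed(42)

# Same toy portfolio as before: (PD, EAD, LGD); defaults drawn independently.
loans = [(0.02, 1_000_000, 0.45), (0.05, 250_000, 0.60), (0.01, 2_000_000, 0.35)]

def simulate_loss():
    # One scenario: each loan defaults with probability PD and loses EAD x LGD.
    return sum(ead * lgd for pd_, ead, lgd in loans if random.random() < pd_)

losses = sorted(simulate_loss() for _ in range(100_000))

el = statistics.fmean(losses)                 # expected loss
ul = statistics.pstdev(losses)                # unexpected loss (std dev of losses)
var_999 = losses[int(0.999 * len(losses))]    # 99.9% quantile of the loss distribution
economic_capital = var_999 - el               # capital held beyond the expected level
k = economic_capital / ul                     # implied capital multiplier

print(f"EL={el:,.0f}  UL={ul:,.0f}  99.9% loss={var_999:,.0f}")
print(f"Economic capital={economic_capital:,.0f}  K={k:.2f}")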
Credit risk may arise in the following forms:

* direct lending
* guarantees and letters of credit
* treasury operations
* securities trading businesses
* cross-border exposure

2.2 THE NEED FOR CREDIT RISK RATING

The need for credit risk rating has arisen for the following reasons:

1. With the dismantling of state control, deregulation, globalisation and allowing things to take shape on the basis of market conditions, Indian industry and Indian banking face new risks and challenges. Competition results in survival of the fittest. It is therefore necessary to identify these risks, measure them, monitor them and control them.

2. It provides a basis for credit risk pricing, i.e. fixing the rate of interest on lending to different borrowers based on their credit risk rating, thereby balancing risk and reward for the bank.

3. The Basel Accord and the consequent Reserve Bank of India guidelines require that the level of capital maintained by the bank be in proportion to the risk of the loans on the bank's books, for the measurement of which a proper credit risk rating system is necessary.

4. Credit risk rating can be a risk management tool for prospecting fresh borrowers, in addition to monitoring the weaker parameters and taking remedial action.

The bank's credit risk rating model provides a framework to evaluate the risk emanating from the following main risk categories/risk areas:

* Industry risk
* Business risk
* Financial risk
* Management risk
* Facility risk
* Project risk

2.3 WHY CREDIT RISK MEASUREMENT?

In recent years, a revolution has been brewing in how risk is both managed and measured. There are seven reasons for this surge in interest:

1. Structural increase in bankruptcies: Although the most recent recession hit at different times in different countries, most statistics show a significant increase in bankruptcies compared to prior recessions. To the extent that there has been a permanent or structural increase in bankruptcies worldwide, due to increased global competition, accurate credit analysis becomes even more important today than in the past.

2. Disintermediation: As capital markets have expanded and become accessible to small and mid-sized firms, the firms or borrowers "left behind" to raise funds from banks and other traditional financial institutions (FIs) are likely to be smaller and to have weaker credit ratings. Capital market growth has produced a "winner's curse" effect on the portfolios of traditional FIs.

3. More competitive margins: Almost paradoxically, despite the decline in the average quality of loans, interest margins or spreads, especially in wholesale loan markets, have become very thin. In short, the risk-return trade-off from lending has gotten worse. A number of reasons can be cited, but an important factor has been the enhanced competition for low-quality borrowers, especially from finance companies, much of whose lending activity has been concentrated at the higher-risk/lower-quality end of the market.

4. Declining and volatile values of collateral: Concurrent with the recent Asian and Russian debt crises, experience in well-developed countries such as Switzerland and Japan has shown that property and real asset values are very hard to predict and to realize through liquidation. The weaker (and more uncertain) collateral values are, the riskier lending is likely to be. Indeed, current concerns about deflation worldwide have accentuated concerns about the value of real assets such as property and other physical assets.

5. The growth of off-balance-sheet derivatives: In many of the very large U.S. banks, the notional value of off-balance-sheet exposure to instruments such as over-the-counter (OTC) swaps and forwards is more than 10 times the size of their loan books. Indeed, the growth in credit risk off the balance sheet was one of the main reasons for the introduction, by the Bank for International Settlements (BIS), of risk-based capital requirements in 1993. Under the BIS system, banks have to hold a capital requirement based on the marked-to-market current values of each OTC derivative contract plus an add-on for potential future exposure.

6. Technology: Advances in computer systems and related advances in information technology have given banks and FIs the opportunity to test high-powered modeling techniques. A survey conducted by the International Swaps and Derivatives Association and the Institute of International Finance in 2000 found that survey participants (25 commercial banks from 10 countries, of varying sizes and specialties) used commercial and internal databases to assess the credit risk on rated and unrated commercial, retail and mortgage loans.

7. The BIS risk-based capital requirements: Despite the importance of the above six reasons, probably the greatest incentive for banks to develop new credit risk models has been dissatisfaction with the BIS and central banks' post-1992 imposition of capital requirements on loans. The current BIS approach has been described as a 'one size fits all' policy, irrespective of the size of the loan, its maturity and, most importantly, the credit quality of the borrowing party. Much of the current interest in fine-tuning credit risk measurement models has been fueled by the proposed BIS New Capital Accord (the so-called BIS II), which would more closely link capital charges to the credit risk exposure of retail, commercial, sovereign and interbank credits.

CHAPTER 3 CREDIT RISK APPROACHES AND PRICING

3.1 CREDIT RISK MEASUREMENT APPROACHES

1. CREDIT SCORING MODELS

Credit scoring models use data on observed borrower characteristics to calculate the probability of default or to sort borrowers into different default risk classes. By selecting and combining different economic and financial borrower characteristics, a bank manager may be able to establish numerically which factors are important in explaining default risk, evaluate the relative degree or importance of these factors, improve the pricing of default risk, be better able to screen out bad loan applicants, and be in a better position to calculate any reserve needed to meet expected future loan losses.
To employ a credit scoring model in this manner, the manager must identify objective economic and financial measures of risk for any particular class of borrower. For consumer debt, the objective characteristics in a credit scoring model might include income, assets, age, occupation and location. For corporate debt, financial ratios such as the debt-equity ratio are usually key factors. After the data are identified, a statistical technique quantifies, or scores, the default risk probability or default risk classification. Credit scoring models include three broad types: (1) linear probability models, (2) the logit model and (3) the linear discriminant model.

LINEAR PROBABILITY MODEL

The linear probability model uses past data, such as accounting ratios, as inputs into a model to explain repayment experience on old loans. The relative importance of the factors used in explaining past repayment performance then forecasts repayment probabilities on new loans; that is, it can be used for assessing the probability of repayment. Briefly, we divide old loans (i) into two observational groups: those that defaulted (Zi = 1) and those that did not default (Zi = 0). We then relate these observations by linear regression to a set of j causal variables (Xij) that reflect quantitative information about the ith borrower, such as leverage or earnings. We estimate the model by linear regression:

Zi = Σj βj Xij + error

where βj is the estimated importance of the jth variable in explaining past repayment experience. If we then take these estimated βj's and multiply them by the observed Xij for a prospective borrower, we can derive an expected value of Zi, which can be interpreted as the probability of default on the loan.

LOGIT MODEL

The objective of the typical credit or loan review model is to replicate judgments made by loan officers, credit managers or bank examiners. If an accurate model could be developed, it could be used as a tool for reviewing and classifying future credit risks. Chesser (1974) developed a model to predict noncompliance with the customer's original loan arrangement, where non-compliance is defined to include not only default but any workout that may have been arranged resulting in a settlement of the loan less favorable to the lender than the original agreement. Chesser's model, which was based on a technique called logit analysis, consisted of the following six variables:

X1 = (Cash + Marketable Securities)/Total Assets
X2 = Net Sales/(Cash + Marketable Securities)
X3 = EBIT/Total Assets
X4 = Total Debt/Total Assets
X5 = Total Assets/Net Worth
X6 = Working Capital/Net Sales

The estimated coefficients, including an intercept term, are:

Y = -2.0434 - 5.24X1 + 0.0053X2 - 6.6507X3 + 4.4009X4 - 0.0791X5 - 0.1020X6

Y is converted into a probability of non-compliance, P, through the logistic function P = 1/(1 + e^(-Y)). Chesser's classification rule for the above equation is: if P > 0.50, assign to the non-compliance group; if P ≤ 0.50, assign to the compliance group.
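As a worked illustration of the scoring rule above, here is a minimal Python sketch. The coefficient vector is the one quoted in the text; the borrower's ratio values are invented for the example, and the logistic mapping from Y to P is the standard logit transformation.

import math

def chesser_noncompliance_probability(x1, x2, x3, x4, x5, x6):
    """Chesser (1974) logit score: returns P, the estimated
    probability of non-compliance with the loan agreement."""
    y = (-2.0434 - 5.24 * x1 + 0.0053 * x2 - 6.6507 * x3
         + 4.4009 * x4 - 0.0791 * x5 - 0.1020 * x6)
    return 1.0 / (1.0 + math.exp(-y))   # logistic transformation

# Invented example ratios for a hypothetical borrower:
p = chesser_noncompliance_probability(
    x1=0.10,  # (cash + marketable securities) / total assets
    x2=8.50,  # net sales / (cash + marketable securities)
    x3=0.07,  # EBIT / total assets
    x4=0.55,  # total debt / total assets
    x5=2.20,  # total assets / net worth
    x6=0.15,  # working capital / net sales
)
group = "non-compliance" if p > 0.50 else "compliance"
print(f"P = {p:.3f} -> assign to the {group} group")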
LINEAR DISCRIMINANT MODEL

While linear probability and logit models project a value for the expected probability of default if a loan is made, discriminant models divide borrowers into high and low default risk classes contingent on their observed characteristics (X). Altman's Z-score model is an application of multivariate discriminant analysis to credit risk modeling. Financial ratios measuring profitability, liquidity and solvency appeared to have significant discriminating power to separate firms that fail to service their debt from firms that do not. These ratios are weighted to produce a measure (a credit risk score) that can be used as a metric to differentiate bad firms from the set of good ones. Discriminant analysis is a multivariate statistical technique that analyzes a set of variables in order to differentiate two or more groups by simultaneously minimizing the within-group variance and maximizing the between-group variance. The variables taken were:

X1: Working Capital/Total Assets
X2: Retained Earnings/Total Assets
X3: Earnings Before Interest and Taxes/Total Assets
X4: Market Value of Equity/Book Value of Total Liabilities
X5: Sales/Total Assets

The original Z-score model was revised and modified several times in order to find scoring models more specific to particular classes of firm. These revisions resulted in the private firms' Z-score model, the non-manufacturers' Z-score model and the Emerging Market Scoring (EMS) model.

3.2 NEW APPROACHES

TERM STRUCTURE DERIVATION OF CREDIT RISK

One market-based method of assessing credit risk exposure and default probabilities is to analyze the risk premium inherent in the current structure of yields on corporate debt or on loans to similarly risk-rated borrowers. Rating agencies categorize corporate bond issuers into at least seven major classes according to perceived credit quality. The first four ratings (AAA, AA, A and BBB) indicate investment-quality borrowers.

MORTALITY RATE APPROACH

Rather than extracting expected default rates from the current term structure of interest rates, the FI manager may analyze the historic or past default experience, the mortality rates, of bonds and loans of similar quality. Here p1 is the probability of a grade B bond surviving the first year of its issue; thus 1 - p1 is the marginal mortality rate, the probability of the bond or loan dying or defaulting in the first year. Likewise, p2 is the probability of the loan surviving the second year, given that it has not defaulted in the first year, so 1 - p2 is the marginal mortality rate for the second year. Thus, for each grade of corporate borrower quality, a marginal mortality rate (MMR) curve can show the historical default rate of any specific quality class in each year after issue.

RAROC MODELS

Based on a bank's risk-bearing capacity and its risk strategy, it is necessary, bearing in mind the bank's strategic orientation, to find a method for the efficient allocation of capital to the bank's individual business areas, i.e. to define indicators that are suitable for balancing risk and return in a sensible manner. Indicators fulfilling this requirement are often referred to as risk-adjusted performance measures (RAPM). The most commonly found forms are RORAC (return on risk-adjusted capital) and RARORAC (risk-adjusted return on risk-adjusted capital). Net income is taken to mean income minus refinancing cost, operating cost and expected losses. It should be the bank's goal to maximize a RAPM indicator for the bank as a whole, e.g. RORAC, taking into account the correlation between individual transactions. Certain constraints, such as volume restrictions due to a potential lack of liquidity and the maintenance of solvency based on economic and regulatory capital, have to be observed in reaching this goal. From an organizational point of view, value and risk management should therefore be linked as closely as possible at all organizational levels.
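A minimal Python sketch of the RORAC calculation as the text defines it, with net income computed as income minus refinancing cost, operating cost and expected losses; all monetary figures are invented for illustration.

def rorac(income, refinancing_cost, operating_cost, expected_loss, economic_capital):
    """Return on risk-adjusted capital: net income / economic capital."""
    net_income = income - refinancing_cost - operating_cost - expected_loss
    return net_income / economic_capital

# Hypothetical business area of a bank (figures invented):
ratio = rorac(
    income=12_000_000,
    refinancing_cost=4_500_000,
    operating_cost=3_000_000,
    expected_loss=1_200_000,
    economic_capital=20_000_000,
)
print(f"RORAC = {ratio:.1%}")   # net income 3.3m on 20m of capital = 16.5%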
OPTION MODELS OF DEFAULT RISK (KMV MODEL)

KMV Corporation has developed a credit risk model that uses information on the stock price and the capital structure of the firm to estimate its default probability. The starting point of the model is the proposition that a firm will default only if its asset value falls below a certain level, which is a function of its liabilities. It estimates the asset value of the firm and its asset volatility from the market value of equity and the debt structure in an option-theoretic framework. The resultant probability is called the Expected Default Frequency (EDF). In summary, EDF is calculated in the following three steps:

i) Estimation of asset value and volatility from the equity value and the volatility of equity returns
ii) Calculation of the distance to default
iii) Calculation of the expected default frequency
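To make steps (ii) and (iii) concrete, here is a minimal Python sketch under simplifying assumptions: asset value and volatility are taken as given (step (i), backing them out of equity data, is omitted), the default point is set at short-term liabilities plus half of long-term liabilities (a common statement of the KMV convention), and the distance to default is mapped to a default probability with the normal CDF as in the Merton model, whereas KMV itself maps DD to EDF through a proprietary empirical database. All inputs are invented.

from math import sqrt
from statistics import NormalDist

def expected_default_frequency(asset_value, asset_vol, st_debt, lt_debt, horizon=1.0):
    """Simplified KMV-style EDF from a given asset value and volatility."""
    default_point = st_debt + 0.5 * lt_debt          # assumed default-point convention
    # Distance to default: number of asset-volatility standard deviations
    # between the current asset value and the default point.
    dd = (asset_value - default_point) / (asset_value * asset_vol * sqrt(horizon))
    return dd, NormalDist().cdf(-dd)                 # Merton-style stand-in for EDF

# Hypothetical firm (figures invented):
dd, edf = expected_default_frequency(
    asset_value=150.0,   # market value of assets
    asset_vol=0.25,      # annualized asset volatility
    st_debt=60.0,        # short-term liabilities
    lt_debt=40.0,        # long-term liabilities
)
print(f"Distance to default = {dd:.2f} -> EDF ~ {edf:.2%}")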
CREDITMETRICS

CreditMetrics provides a method for estimating the distribution of the value of the assets in a portfolio subject to changes in the credit quality of individual borrowers. A portfolio consists of different stand-alone assets, each defined by a stream of future cash flows. Each asset has a distribution over the possible range of future rating classes. Starting from its initial rating, an asset may end up in any one of the possible rating categories. Each rating category has a different credit spread, which is used to discount the future cash flows. Moreover, the assets are correlated among themselves depending on the industry they belong to. It is assumed that asset returns are normally distributed and that changes in asset returns cause changes in the future rating category. Finally, a simulation technique is used to estimate the value distribution of the assets: a number of scenarios are generated from a multivariate normal distribution, and the future value of each asset is estimated by discounting at the credit spread appropriate to the simulated rating.

CREDITRISK+

CreditRisk+, introduced by Credit Suisse Financial Products (CSFP), is a model of default risk. Each asset has only two possible end-of-period states: default and non-default. In the event of default, the lender recovers a fixed proportion of the total exposure. The default rate is treated as a continuous random variable. The model does not try to estimate default correlation directly; instead, default correlation is assumed to be driven by a set of risk factors. Conditional on these risk factors, the default of each obligor follows a Bernoulli distribution. To obtain the unconditional probability generating function for the number of defaults, the risk factors are assumed to be independently gamma-distributed random variables. The final step in CreditRisk+ is to obtain the probability generating function for losses. Conditional on the number of default events, the losses are entirely determined by the exposure and the recovery rate. Thus, the distribution of losses can be estimated from the following input data:

i) Exposure of each individual asset
ii) Expected default rates
iii) Default rate volatilities
iv) Recovery rates given default

3.3 CREDIT PRICING

Pricing of credit is essential for the survival of enterprises relying on credit assets, because the benefits derived from extending credit should surpass the cost. With the introduction of capital adequacy norms, credit risk is linked to capital: a minimum 8% capital adequacy. Consequently, more capital is required to be deployed if more credit risk is underwritten. The decision (a) whether to maximize the returns on possible credit assets with the existing capital or (b) to raise more capital to do more business invariably depends upon p