Wednesday, July 31, 2019

Indian BPOs Waking Up to the Philippines Opportunity Essay

1. How has the global economic downturn, discussed in the opening profile and throughout this chapter, impacted jobs outsourcing in the BPO industry?

The global economic downturn has impacted jobs outsourcing in the BPO industry, which has become one of the largest job creators in India. Because of this, many companies had to increase their operational output and employ more and more people to keep up with the expanding trend. One of the major impacts, growth and maturity, affected the BPO industry and contributed to the outsourcing companies and the continued growth of the industry. The IT and BPO outsourcing boom created a huge impact on the Indian economy: it increased IT salaries, the cost of living and real-estate prices, and it eventually increased the outsourcing cost for many companies. Job outsourcing seemed to be the only viable option, as the global economic downturn had created many problems when it came to funding and financial institutions. Cutbacks were needed in order to keep things afloat, and most companies saw that job outsourcing would solve at least one of the many problems involved in the economic downturn. Not only does it save companies in the US and UK money by offloading jobs abroad for cheap labor, it also gives countries like India and the Philippines the opportunity they need to secure a faster-growing economy and to generate much-needed jobs. The global economic downturn prompted mergers and many acquisitions in the BPO industry and created certain flexibilities for some companies, which helped with resource management. The main reason why the economic downturn has impacted job outsourcing in a major way is that outsourcing is cost effective and helps companies concentrate on core areas. It also contributes to improvements in productivity.

2. Referring to this chapter and this case, discuss the general trends in the globalization of human capital.

3. What are the effects of the Indian government policies on the Indian BPO industry and on MNC decisions regarding locations for outsourcing jobs?

4. How does this case highlight the threats and opportunities facing global companies in developing their strategies?

Tuesday, July 30, 2019

Belonging Essay

Belonging, in essence, refers to the notion associated with the connections individuals make with people, groups and places. Conversely, by belonging to a certain group or place, others are indirectly excluded from belonging in the process. Belonging is a connection that we all, as humans, instinctively seek out; it forms part of our natural behaviour. Through this process of belonging we ultimately conform and grow as a person, the outcome of which is that our place in society is established. "The Immigrant Chronicle" by Peter Skrzynecki illustrates how difficult finding a sense of belonging can be by raising the issues experienced when attempting to assimilate in a new cultural environment, with all the associated physical, personal and social changes. Similarly, Jeffrey Smart's painting "The New School" and the short film 'Mr Cheng' explore how barriers can hinder our sense of acceptance and belonging. Though it is an innate need to belong, it is not always achieved. This idea is accentuated throughout "In the Folk Museum" as the composer struggles to relate to a history and culture which is not his own. The visit highlights his inner conflict of not knowing where he belongs. He does not feel like a 'true' Australian who may look at such 'relics', see cultural significance in them and understand their historical value. Rather he views them in a detached sense: "To remind of a past/ Which isn't mine". The poet uses a faceless caretaker as a representative of Australia's past. She sits next to a "winnowing machine", an agricultural machine that separates grain from chaff, creating a metaphor for separating the 'true' Australians from new migrants. The poet illustrates the caretaker as dull and uninviting, matching her hair colour with the grey clay bottle in the museum, causing them to appear to be made from the same entity and adding to the composer's discomfort and estrangement. The composer emphasises his lack of belonging by describing the colour of the museum as well as its "cold as water" touch, showing the disconnection and isolation the composer experiences. The poem reinforces this idea when the composer is asked to sign the visitor's book, emphasising that he is only a viewer of Australian history rather than a part of it. Likewise, "St Patrick's College" also portrays the need to belong; however, it reveals that belonging does not always come naturally, despite his mother's attempt to find a way to connect through the uniform and the school's reputation. The poet explores this attempt to belong in the third stanza through the stress of "eight years" passing by while he is still "Like a foreign tourist, Uncertain of my destination, Every time I got off." The poet reinforces this idea again with the repetition of time at the start of the fourth stanza, establishing that no matter how long he stays, he is still not able to belong. Despite the use of the uniform, it is evident that it is only a facade, used in order to create an illusion of belonging. It is not the uniform that binds students together, but rather a unique connection shared with individuals and place. (Link to question here and back to thesis). Jeffrey Smart's painting "The New School" establishes that although belonging is an innate need, it is not always achieved. Smart is described as a social commentator, a witness to the alienated city dweller or worker in a dehumanised landscape. Elements of his paintings are taken from real places, but they are modified and generalised.
The effect is that he creates universal scenes which could be renderings of any large modern city rather than being anchored in Australia or Europe. The concept of isolation is shown through the individual's body language and the placement of her hands, as well as her positioning in the painting. She is distant from the other students, as well as from the school, showing she does not fit in, emphasising her disconnection and lack of belonging. Jeffrey Smart also creates a negative feel through the dark colour of the sky and the sadness the girl displays through her facial expression, showing the difficulty of belonging and how the individual feels isolated as she has no connection with these people, places or groups. Similarly, in "In the Folk Museum", the composer feels alienated, unable to relate to the Australian culture and history. The poem initiates feelings of isolation and disconnection as the poet questions himself and his place in society because he does not experience a link with the history viewed. Both "St Patrick's College" and "The New School" relate to a place and a lack of belonging. Smart furthermore highlights the struggle to belong through the vectors in his painting. This is shown by the lines of the basketball court, fading as they progressively near the individual, emphasising the student's estrangement from the school. In the same way, Mr Cheng experiences a lack of belonging through the vectors employed by the director, as images of his family appear but then fade, emphasising his uncertainty about where he belongs. The director revisits the idea of isolation, illustrated through the severed family connections that Mr Cheng has suffered. Mr Cheng's alienation is echoed throughout the piece as the director emphasises his exclusion throughout the film by portraying Mr Cheng as secluded from society. This struggle is shown by the projections of his memories on a brick wall throughout the film, creating a metaphor that represents the wall as a barrier, showing that although he possesses these memories he is blocked from accessing his true identity. Peter Skrzynecki's 'The Immigrant Chronicle' allows one to see the difficulties the poet, as a second-generation migrant, experiences: the dichotomy of belonging to a culture which is not his own and the feeling of estrangement from his parents' culture. This concept is also shown through Jeffrey Smart's painting 'The New School', as the individual struggles, similarly to Mr Cheng, to relate to an environment which is unfamiliar.

Utilitarianism, Deontology and Virtue

Anthony B. Fielding

Utilitarian, Deontological, and Virtue Ethics

The concept of utilitarianism is closely related to the philosophy of consequentialism. Basically this means that the moral and ethical value of a person's action should be judged by the consequence of that action. Utilitarianism is believed to be the most important of the three ethical theories because it has helped shape our world's politics, economics, and public policy. This ethical theory explains to us that we can determine the ethical significance of an act by judging its consequence. So basically I understand utilitarianism as: what is good for the majority is just, and happiness is the ultimate goal. An example of my own would be: a utilitarian would say that if six people were trapped on an island, two adult males and four small children, with enough food to last two days if they all ate, but enough for the children to eat for a week if the men did not eat, then it is okay for the two men to starve to death if it means that the four children live long enough to be rescued.

Bound by our duties, walk the walk, and practice what you preach: that is the deontologist's view of ethics. Contrary to utilitarianism, deontology says that there are some things that we should or should not do regardless of the consequence. Law enforcement officers wear a badge of honor, 'To serve and protect.' This motto is definitely one that describes deontology. Another popular slogan we hear is "Be all you can be." This duty is demonstrated by our U.S. Army. A bodyguard's duty is to protect his employer to the extent of endangering his own life.

Virtue ethics questions how we should live our lives. A person is judged by his character and not by the actions he may uncommonly choose. Character building takes a lot of work. Character is introduced to us at a very young age by many sources: parents, grandparents and teachers, just to name a few. To me, virtue ethics can be confused with 'do as I say and not as I do.' A person who practices virtue ethics may work for a company that sells automatic machine guns, but teach in his home that guns are bad.

Monday, July 29, 2019

H-D Strategic Audit Essay Example | Topics and Well Written Essays - 2250 words

H-D Strategic Audit - Essay Example Throughout, the report takes into account the numerical figures of the various parties and entities involved with Harley-Davidson. It is a matter of simplicity that the prime factor governing the success of any motor company such as Harley-Davidson is the quality of the bikes produced. Quality is a key determinant of a company's success, which is why most major and successful corporations spend so much money on research and development in order to improve the quality of the product. The results of the CAD system at HD (Harley-Davidson) can be seen in this regard, with breakeven achieved at sales of just 35,000 units in 1986, down from the earlier 53,000. This was a major factor in the company's survival during the 80s, in that it succeeded in reducing the number of defects in its products by stepping up development activities. One of the most important factors for any company is its financial assets and liabilities. If we look at the consolidated figures of HD (Exhibit 5B), we will find that while the net identifiable assets of HD have risen by 47%, the corresponding figures for depreciation and capital expenditures have been 35% and 75% respectively within a gap of two years in the recent past (between 1999 and 2001). The major cause of concern in this regard is therefore the rise in net capital expenditures, which is a point that requires thought. The motorcycle market generally comprises four main segments: Standard, Performance, Touring and Custom. In spite of having these four different markets, the company has focused its activities on just two areas, namely touring and custom. Generally, it is expected that a company would like to try its hand in all possible domains, but what remains surprising is why HD has limited itself to these two sectors over all these years. Custom-built bikes are the dream of any individual, be they young or old. But in spite of this, most companies, including HD, continue to charge exorbitant amounts for building custom-made bikes. The figures are much more significant for HD, which charges around 50% more than its nearest competitors in this sector. This has been the reason for the company's mere 15% share in this area on a global basis. Though waiting times have reduced drastically, both for customers and for dealers, they have not yet come down to zero. According to industry gurus, this is one area where HD should shed its complacent mindset (assuming that the customer will never turn away). The company must therefore devise strategies to reduce this time gap further, which could eventually lead to higher sales, enable HD to capture a larger share of the market and allow it to tighten the noose on its competitors. But in order for HD to make any progress in this regard, it would have to revise and revamp its existing demand and supply chain with respect to its suppliers of raw materials and components. We sincerely recommend that HD not resort to 'channel stuffing' its dealers as sales are dying down, and that it instead devise alternate strategies. The dealers in this regard present another concern with the selling of bikes at a premium, with customers having to pay up to $4000-5000 in

Sunday, July 28, 2019

Management, Work and Society Essay Example | Topics and Well Written Essays - 3750 words

Management, Work and Society - Essay Example It also gives an idea about when and how the recognition and measurement of intangible assets are done in a business corporation. Further, it elaborates the conditions needed for an asset to qualify as an intangible asset. The paper thoroughly describes elements associated with intangible assets, such as why corporations purchase or internally develop them and what examples of intangible assets exist. Furthermore, the paper describes the value of intangible assets for a business corporation. Nowadays, the accounting of intangible assets in a business combination is a very sensitive area. Further, it also shows the importance of intangible assets held by the acquiree's business when going for a business combination. It also describes the growing importance of various types of intangible assets such as human resources and technology. It also helps us to understand the basis of valuation of an intangible asset in the context of sports players in corporate teams. It highlights the trend of corporate houses buying sports teams and naming them after themselves. These corporate houses treat these sports teams and their players as intangibles, and their valuation is a major concern for them (Cohen 2011).

Intangible assets
Intangible assets can be defined as non-monetary assets which cannot be seen, touched or measured physically. These are identified as separate assets and are created through time or effort. Hand and Lev (2003) have stated that intangibles can be identified in two basic forms, viz. legal intangibles and competitive intangibles. Legal intangibles comprise copyrights, patents and trademarks, whereas competitive intangibles comprise various activities related to the acquisition of knowledge, collaboration activities, leverage activities, structural activities, human capital, competitive advantage, etc. Legal intangibles are generally called intellectual property, and those who possess these assets have the legal right to defend them in a court of law. On the other hand, competitive intangible assets cannot be owned legally but are of great importance. They have a direct influence on the effectiveness, productivity, costs, revenue, customer satisfaction, market value and overall performance of an organization (Hand and Lev 2003). Intangible assets can also be categorized as those that are included for accounting purposes and appear in the balance sheets of companies. Such intangible assets include licenses and patents, purchased goodwill and capitalized R&D costs. The other category includes the intangible assets that are not considered in accounting but are regarded as assets from an economics perspective. Intangible assets that are generally excluded under accounting rules include staff training, brand value, the development of IT systems, and customer networks. As per IAS 38, an intangible asset other than goodwill can be defined as a non-monetary asset which does not have any physical substance. An asset can be recognized as an intangible asset only if it is expected to yield future economic benefits and

Saturday, July 27, 2019

Aristotle Research Paper Example | Topics and Well Written Essays - 2500 words - 1

Aristotle - Research Paper Example However, others have agreed with both theories, stating that nature provides individuals with these traits while nurture serves to mould and develop these traits through maturity and learning. The influence of heredity and the environment is, however, evident in many individuals. The genetic makeup of an individual is derived from his parents, and this is due to heredity. This has a lot of influence on the behaviour of an individual, as these traits have been reported by researchers to influence intelligence, personality, sexual orientation and aggression (Ceci & Williams, 123). These traits are encoded in an individual's DNA and hence are inherited by the offspring. The physical appearances of various individuals, like the colour of the skin and eyes, and height among other things, have confirmed this, and hence there is a possibility that nature plays a very important role in influencing the way we behave. For example, fraternal twins that are reared under the same conditions will never behave the same way, as they possess different traits from their parents. Nurture, on the other hand, is also very important in determining our behaviours, as these traits are merely latent in the body and have to be developed in order to fully emerge and influence the behaviour of an individual. ... This is only possible when such individuals practice how to be intelligent and creative, and the type of environment in which they live also contributes a lot. It is said then that he was ''nurtured'' by certain people (Ceci & Williams, 134). An example here is to consider that identical twins brought up under different conditions will never behave like each other. The persistence of the nature vs. nurture debate has continued for several centuries due to certain underlying issues. One of the most important characteristics of this issue is that there are several issues knitted together by ambiguity and uncertainty into a problem that is very difficult to solve (Ceci & Williams, 147). This makes the people in the debate unable to put their focus on one single, defined, meaningful question. Another issue arises from the genetic language itself: we need to differentiate the meaning of nature and nurture and what most scientists call the contributions of the two terms. Sometimes the difference is that nature is about what is inside, while nurture is what we acquire from the environments that we interact with. Contributions here means the impact of either nature or nurture on the behaviours of individuals. The controversy here is that some people believe that what is inborn contributes to or determines what an individual will be. Nature bases its argument on the genes, while the nurture argument is based on the environment (Ceci & Williams 137). There is a need, therefore, to define very well the meaning of gene and environment, as they are the key issues in this argument. We need to understand what the gene does and what the environment does as far as behaviour is concerned. These two issues are central to this debate. Aristotle's argument in the

Friday, July 26, 2019

Methods for Increasing Employee Motivation Research Proposal

Methods for Increasing Employee Motivation - Research Proposal Example The discussion will attempt to address the primary question framed as follows: What programs may be suitable to increase employee productivity and revenue, taking into account that General Trading has expanded across global boundaries? This problem regarding employee productivity, if it remains unsolved, may damage the reputation of the company, which has been built over the years. It may even compromise the efficiency of the company, which may lead to increased customer dissatisfaction. So a proper analysis of the situation and probable solutions to increase employee productivity is vital in this scenario. General Trading is a wholesale food distributor with clients abroad to whom it exports its product offerings across borders. For the company, maintaining a healthy motivation level among the employees and thereby increasing productivity is crucial to its development and flourishing. In order to increase the motivation level among the employees, an employee incentive program is very much essential for the company. In today's world, employees are the biggest assets of an organization. Companies are paying salaries that are competitive with other companies in order to keep employees motivated, as well as to retain them. Most of them have already moved towards an 'Employee Incentive Program', which is itself a part of performance appraisal. It has also been found in a study that employee engagement, which mainly comprises commitment to work as well as the employee's job satisfaction with the company, increases through the performance appraisals that happen in a company from time to time (Scott, "Introduction"). Employee incentive programs have been employed by many companies and are considered a highly valuable tool for keeping the employees in an organization motivated by offering various kinds of incentives on the basis of their performance in the organization or on similar grounds. Implementing an employee incentive program in the organization has also led to resolving issues that exist between the company's top management and its employees on the production front. It also leads to the identification of the most efficient and effective employees in the organization, who can further be moved up the hierarchy and can be groomed for possible leadership roles in the organization. As a result, it leads to the identification of talent within the organization without hunting for the right candidate in the open market.

Thursday, July 25, 2019

Who Are The Innocents The Psychology Of Confessions Essay

Who Are The Innocents The Psychology Of Confessions - Essay Example A recent article (Kassin 2005) on the psychology of confessions, for example, suggests that videotaping should be mandatory, but this proposal will focus on who the innocents are, avoiding similar modalities. Therefore we will define innocence as a legal state and, remembering the legal maxim "innocent until proven guilty," innocents as those who are not guilty (Blackstone 1765). The study we propose, to measure why innocents confess, will be empirical, following an experiment closely resembling that carried out by Kassin and Kiechel (Kassin and Kiechel 1996), using participants' testimony. The participants will carry out an experiment that contravenes the maxim "innocent until proven guilty," because we can show that the application of psychology to innocence is not relevant if innocent people can think themselves guilty as a result of Kassin and Kiechel's experiment. These psychologists' results are expected to be repeated. Kassin and Kiechel interestingly define features of innocents' false confessions as 'confabulated' and 'internalised', which is interesting because these same words are used in memory research into false memory. Kopelman describes the varieties of false memory as "spontaneous confabulation in brain disease, false recognition cases, delusional memories and other delusions in psychosis, "confabulations" in schizophrenia, "internalised" false confessions for crime, apparently false or distorted memories for child abuse, pseudologia fantastica, the acquisition of new identities or "scripts" following fugue or in multiple personality, and momentary confabulation in healthy subjects."1 The academic psychology of confessions is mistaken when it presumes that establishing innocence is the purpose of law. Rather, trials happen because a crime has been committed and the law seeks to establish guilt, to punish the guilty. Psychology does not punish, as shown by Kassin and Gudjonsson; instead it designs confessional experiments (Kassin and Gudjonsson 2004). Many experiments have inbuilt tricks to deceive, replicating experimenters' expectations, in much the same way that many pupils in the classroom replicate teachers' expectations (Rosenthal and Jacobson 1968). An example of a study devised by psychologists is a reaction-time experiment. After warning participants not to hit a key that would cause the machine to crash, experimenters deliberately crashed the machine, reasoning that participants could be made to confess. In many cases the participants did falsely confess, guiltily participating in the psychologists' study, while they believed the experiment was about reaction time. Legal cases abound where innocents have been convicted. In 2005, prosecutors forced a confession from a fourteen-year-old boy, who confessed to murder in Illinois. The victim found an intruder in his parked car and was shot in the chest. The boy described to prosecutors how he broke into the car, struggled with the man and then shot him, after two weeks in detention and suggestions that he would go to prison for ten to fifteen years and that he would receive legal help. Moreover, the boy was encouraged to plead self-defence, in spite of the fact that the murderer had broken into the victim's car with a gun, firing it lethally. Another example comes from Escondido, California, where Michael Crowe, 14, confessed to the murder of his sister.
He was falsely told by prosecutors that his hair was found in his dead sister's hand, that her blood was in his bedroom and that he failed a polygraph. He came to believe that he had an alter ego and confessed after hours of questioning with neither a

Wednesday, July 24, 2019

540 team paper Essay Example | Topics and Well Written Essays - 750 words

540 team paper - Essay Example That is, when both companies announced their plans to merge, the financial issue that impeded the progress of the actualization of the merger was the view that large stock transactions between HP and Compaq appear to be statistically more risky. So, immediately after the announcement, "H-P's stock closed at $18.87, down sharply from $23.21 the previous trading day. On May 3, 2002, when the deal was officially consummated, the stock ended the day at $17.44" (Knowledge@Wharton, 2004). But this skepticism and the initial problems were overcome by the HP management by looking at the positive aspects of the merger. That is, the HP management expected that through large stock transactions (which were considered risky in the first place), HP would be able to "achieve annual cost savings of $2.5 billion, which will add $5 to $9 to each HP share; and at the same time will increase earnings per share by 13% during the first year following the merger" (Cybermedia India Online, 2003). This positive response or strategy worked in favor of the merger in 2002. So the outcome of the company's positive response to the financial issue, and of the resultant merger, is that HP was able to dominate the sector of desktops, laptops, and servers in various world markets. But on the other side, even after the merger, there was opposition from the scions of the HP founders. That is, both Walter Hewlett, son of HP founder William Hewlett, and David Woodley Packard, son of co-founder David Packard, opposed the merger for various reasons, including the risks caused by large-scale transactions. But now the merged entity is functioning smoothly without any major hindrances. Like in the Lester scenario, in which the merger plans between Lester Electronics and Shwang-wa do not actualize due to financial issues, the merger plan between the French companies Gaz de France and Suez O. K. also gets

Responsibilities of Public Administrators Essay Example | Topics and Well Written Essays - 250 words

Responsibilities of Public Administrators - Essay Example The excerpt outlines that the federal court judge ruled in favor of the authorities, but not because of a clear verdict justified by law. McKelvey (2011) notes that administrators should show concern for the public good by executing actions that are justifiable to the citizens. The administrative duties need verification by the public as acts of common good prior to their implementation, in a manner that does not threaten civil liberties. According to McKelvey (2011), public administrators should ensure due process in the execution of their duties. That serves to prevent the infringement of individual citizens' rights and avoids threats to civil liberties. Commands to terminate suspects' lives should be backed by evidence. In this case, questioning the suspect and investigating the matter would have been appropriate in substitution for the spray of missiles from the drone (McKelvey, 2011). Procedural respect towards citizens can help to avoid the execution of citizens based on null and non-existent hypotheses. Public administrators should abide by the law as well as show honesty and truthfulness while executing their tasks. As outlined in the law, the executive authority in question needed to have an arrest warrant before executing their duties. Contrary to that, the administrators never had an arrest warrant. Failure to justify the reason for the killing constitutes a threat to civil liberties. It would have been proper to provide concrete justification for their actions while executing their duty.

Tuesday, July 23, 2019

Research assignment Thesis Proposal Example | Topics and Well Written Essays - 750 words

Research assignment - Thesis Proposal Example It is as if the country encourages technology diffusion within its boundaries, but its relations with other countries are tied to impeding technology. Iraq's war with Iran is the prime example of impeded technology diffusion. The rate of acquiring technology accelerated in weapons and warfare tactics; according to reports, almost $94 million worth of US computer technology was sold to Iraq during that war (Hurst 58). But sadly, all other areas of governance were ignored. The economies of both countries suffered severe blows due to the war. Technological progress was pushed back. The only beneficiary of the war was the weapons industry. The education system was probably one of the biggest losers in this bloody game. Well-developed countries nurture their education systems to produce brilliant generations. Such an output contributes to society. Iraq has not had the peace and resources to invest in technology for its school system. Now the country is slowly rising from the ashes. Maybe in a few years' time the country will start producing excellent technology. The Shia-Sunni conflict in Iraq is an old problem. Iraq's history has been plagued with these conflicts since the sixteenth century. The Ottoman Empire (Sunnis) and Iran (Shiites) were frequently fighting over Iraqi territory during that time span. Technology does not have a religion, and it does not have sects. It needs a peaceful environment to bloom. Conflicts like that between Shias and Sunnis in Iraq are a major cause of impeded technology in that region. Shias and Sunnis could contribute much towards technology by cooperating, but sadly they are involved in a tussle for power. No technology company would want to invest in an area where there is uncertainty. Even well-renowned universities would hesitate to open research centers in a place where there is anarchy and chaos. Iraq is one such place, where tech companies feel hesitant about putting up their factories and research & development centers. In

Monday, July 22, 2019

With diagrams compare Essay Example for Free

With diagrams compare Essay This type of communication is between the sender and the receiver is known as connectionless (rather than dedicated) Contrasted with packet-switched is circuit-switched, a type of network such as the regular voice telephone network in which the communication circuit (path) for the call is set up and dedicated to the participants in that call. For the duration of the connection, all the resources on that circuit are unavailable for other users. Voice calls using the Internets packet-switched system are possible. Each end of the conversation is broken down into packets that are reassembled at the other end. The principles of packet switching are as follow. Messages are divided into data packets, which are then directed through the network to their destination under computer control. Besides a message portion, each packet contains data concerning. The principles of packet switching are as follow. Messages are divided into data packets, which are then directed through the network to their destination under computer control. Besides a message portion, each packet contains data concerning: Â  The destination of the address; Â  The source identification; The sequence of the packet in the complete message; Â  The detection and control of transmission errors. Â  Pre-determined routing. With this method, the routing details are included in the packet itself, each switching exchange forwarding the packet according to the embedded instructions; Â  Directory routing. Each switching exchange has a copy of a routing table to which it refers before forwarding each packet. The appropriate output queue is determined from the table and the packet destination Diagram shown below: Identify three types of cabling used in data communication. State which one you would recommend in an implement requiring high security consideration and why? The three types of cables used in data communication are: Optical Fiber Coaxial Coaxial cable is a copper that is used by TV companies between the community antenna, and also the user homes and businesses. At times these cable are also used by telephone companies from their central office to the telephones near users. This is also widely installed for use in business and corporation Ethernet and other types of local area network. Coaxial cable is called coaxial this is because this includes one physical channel that carries the signal surrounded (after a layer of insulation) by another concentric physical channel, both running along the same axis. The outer channel serves as a ground. Many of these cables or pairs of coaxial tubes can be placed in a single outer sheathing and, with repeaters, they can carry information for a great distance. This is a diagram shown below: UPT UPT stands for Unshielded twisted pair. This cable is the most common kind of copper telephone wiring. Twisted pair is the ordinary copper wire that connects home and many business computers to the telephone company. To reduce crosstalk or electromagnetic induction between pairs of wires, two insulated copper wires are twisted around each other. Each signal on twisted pair requires both wires. Since some telephone sets or desktop locations require multiple connections, twisted pair is sometimes installed in two or more pairs, all within a single cable. For some business locations, twisted pair is enclosed into a shield that functions as a ground. This is known as shielded twisted pair (STP). 
The twisted pair is now frequently installed with two pairs to the home, with the extra pair making it possible for you to add another line (perhaps for use of a modem) when you need it. Twisted pair comes with each pair uniquely colour-coded when it is packaged in multiple pairs. Different uses such as analogue, digital and Ethernet require different pair multiples. Although twisted pair is often associated with home use, a higher grade of twisted pair is often used for horizontal wiring in LAN installations because it is less expensive than coaxial cable. The wire that you buy at a local hardware store for extensions from your phone or computer modem to a wall jack is not twisted pair; it is a side-by-side wire known as silver satin. The wall jack can have as many as five kinds of hole arrangements or pinouts, depending on what kinds of wire the installation expects to be plugged in (for example, digital, analogue, or LAN). (That's why you may sometimes find, when you carry your notebook computer to another location, that the wall jack connections won't match your plug.) This is a diagram shown below:

Optical Fiber
Optical fiber (or fiber optic) refers to the medium and the technology associated with the transmission of information as light pulses along a glass or plastic wire or fiber. Optical fiber carries much more information than conventional copper wire and is in general not subject to electromagnetic interference and the need to retransmit signals. Most telephone company long-distance lines are now optical fiber. Transmission on optical fiber wire requires repeaters at distance intervals. The glass fiber requires more protection within an outer cable than copper. For these reasons, and because the installation of any new wiring is labour-intensive, few communities yet have optical fiber wires or cables from the phone company's branch office to local customers (known as local loops). A type of fiber known as single-mode fiber is used for longer distances; multimode fiber is used for shorter distances. This is the diagram shown below:

By analyzing and researching the three cables above, I would recommend the fiber optic cable, because I believe it offers high security and also has the following advantages:
- Fiber optic cables have a much greater bandwidth than metal cables. This means that they can carry more data.
- Fiber optic cables are less susceptible than metal cables to interference.
- Fiber optic cables are much thinner and lighter than metal wires.
- Data can be transmitted digitally (the natural form for computer data) rather than analogically.

Identify the alternative forms of communication media and provide examples of their use in different forms of network.

Microwave
Microwave frequencies require a direct line of sight between the sending and receiving stations to operate. Microwave systems were the preferred method of communications transmission before the introduction of fiber optic.

Radio
Radio is the lowest-frequency domain that needs to be named. It extends from wavelengths of a kilometre or so, the longest that will propagate through the interstellar medium, down to about a millimetre. The detection of radio radiation is often done using wave techniques rather than photon counting, because of the low photon energies, and this offers distinct advantages for such applications as interferometry, which astronomers working in the infrared and optical regimes view with some envy.
From active nuclei, we often detect synchrotron radiation in this range: radiation produced when energetic charged particles (mostly electrons) are deflected by magnetic fields.

a) Define the basic signal theory with the aid of diagrams.

1) In electronics, a signal is an electric current or electromagnetic field that is used to convey data from one place to another. The simplest form of signal is a direct current (DC) that is switched on and off; this is the principle by which the early telegraph worked. More complex signals consist of an alternating-current (AC) or electromagnetic carrier that contains one or more data streams. Data is superimposed on a carrier current or wave by means of a process called modulation. Signal modulation can be done in two main ways: analogue and digital. In recent years, digital modulation has been getting more common, while analogue modulation methods have been used less and less. There are still plenty of analogue signals around, however, and they will probably never become totally extinct. Except for DC signals such as telegraph and baseband, all signal carriers have a definable frequency or frequencies. Signals also have a property called wavelength, which is inversely proportional to the frequency (wavelength equals propagation speed divided by frequency).

2) In some information technology contexts, a signal is simply that which is sent or received, thus including both the carrier and the data together.

3) In telephony, a signal is special data that is used to set up or control communication.

Almost everything in the world can be described or represented in one of two forms: analogue or digital. The principal feature of analogue representations is that they are continuous. In contrast, digital representations consist of values measured at discrete intervals. Digital watches are called digital because they go from one value to the next without displaying all intermediate values. Consequently, they can display only a finite number of times of the day. In contrast, watches with hands are analogue, because the hands move continuously around the clock face. As the minute hand goes around, it not only touches the numbers 1 through 12, but also the infinite number of points in between. Early attempts at building computers used analogue techniques, but the accuracy and reliability were not good enough. Today, almost all computers are digital.

Analogue and Digital Technology
Analogue and Digital are the words we hear when people talk about Communication and Information Technology. What do the words Analogue and Digital mean? Analogy means a likeness between two things that are really quite different, for example the analogy between the brain and the computer or the heart and a pump. Digit means either a finger or toe, or one of the numbers 0 to 9. Some examples might help to explain what analogue and digital mean in technology.

A simple example of analogue and digital technology
Clocks are examples of analogue and digital technology. An analogue clock face can display the time without numbers. The hands keep moving all the time and they continue to rotate, just like the earth around the sun. This is the analogy between the movement of the sun and earth and the hands of the clock. The digital clock displays the time in numbers, and the time displayed only changes each minute. In the analogue clock the hands keep moving all the time, while the digital clock is more like an on and off movement.
Each minute the time moves and then stops for another 60 seconds, when it changes again. There are other examples of displaying information using analogue and digital forms.

b) How does signal theory affect the choice of transmission methods and media?

Analogue and Digital Signals
Sound can be converted into analogue and digital electrical signals.

Analogue signal
A microphone or the handset of a telephone will convert sound into an analogue signal. The shape of the wave seen on an oscilloscope represents the volume and pitch. The diagram is shown below:

This is called an analogue signal because, when the volume and pitch change, so does the shape of the wave. The signal is an analogue of the sound.

Digital signal
Today we see many sound systems described as digital. This means the sound is converted into digital signals so it can be transmitted or recorded. In the microphone example shown in the diagram above, the analogue signal is converted into a digital signal by electronic circuits. In a digital signal the electricity, which can be either on or off, is combined with a binary code. The voltage of the analogue signal is measured electronically, many thousands of times per second, by an analogue-to-digital converter. The analogue signal is converted into a 16-bit binary number, which gives 65,536 levels of voltage. In electronics 1 = ON and 0 = OFF. This means the binary number can be converted into an electrical signal. The diagram below shows the process of converting analogue signals into binary numbers and digital signals. To keep the explanation simple, the analogue signal has been converted into a 3-bit binary number, which means there are eight voltage levels (2^3 = 8). A digital-to-analogue converter reverses the conversion, because the speakers (output device) need an analogue signal. Light and sound can be converted into binary numbers and digital signals that are used to record and transmit information. This diagram is shown below:

Why are digital systems better than analogue ones?
An analogue signal is affected by changes in the voltage as it travels along a wire. If the voltage changes, so does the signal at the output. The digital signal is not affected by changes in the voltage, because all that matters is whether it is ON or OFF.

How signal theory affects transmission methods
Noise is any sound on the CD or record that wasn't there at the performance during the recording session. More generally, it is any unwanted signal that adds on to the information that is being transmitted. When a vinyl record is being made, noise is introduced at every step of the recording process, although of course the company makes every effort to reduce such noise to as low a level as possible. The sound that reaches the microphones is converted into an electrical signal that is then recorded on a wide magnetic tape moving at high speed. This tape is then used to control the cutting of a master disc, from which moulds are then made. These in turn are used to mass-produce the records that are eventually sold in shops. Noise is produced at every step, not forgetting that introduced by your own stereo equipment. It can never be entirely eliminated. The same problems of noise are shared by any method of transmitting information, certainly including telecommunications and telephone calls. In the production of vinyl records, the companies have used purely analogue means to transfer the information representing the sound of the music from one point to another.
That means they use an electrical signal that changes smoothly in strength, exactly modelling the smooth but complex changes in the sound. When noise is created in the recording process, because of tape hiss, dust on the master disc, electrical interference or any other cause, it is added on as a random signal on top of the complex electrical signal representing the sound. There is no way that electronic equipment can tell such random noise from the original electrical signal, so there is no way it can be removed again without removing some of the original signal. We can see this more clearly if we draw a graph of the level of the analogue audio signal over a period of time (diagram 1a). The shape of this graph represents both the changes in the sound and the changes in the electrical signal that model it. Now if we add to this audio signal some random noise, this affects the shape of the signal, and this degrades the sound that your stereo reproduces (diagram 1b). The trouble with an analogue audio signal is that its exact shape has to be preserved if you are to hear the music exactly as it was when it was played. If there were a means of transmitting the signal so that only the overall shape of the signal mattered, then noise would not be so important. The port authorities used to find the shape of the bottom of the harbour so that ships could navigate more safely. It certainly wasn't possible to drain the harbour and take a photograph of it, so what they did instead was send out a boat which travelled slowly across the harbour. Every few metres a person at the back of the boat dropped down a plumb-line (a weight at the end of a rope) until it reached the bottom of the harbour. The line had knots tied in it at regular spaces, and the person called out the number of knots under water, so indicating the depth of the harbour at that point. A clerk wrote these down, and eventually it was possible for him to draw a graph of the shape of the harbour by using these numbers. The person in the boat had been taking samples of the depth of the harbour at frequent intervals, so that the graph would accurately describe the ups and downs of the harbour bottom.
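The harbour measurement above is exactly the sampling idea behind the analogue-to-digital conversion described earlier: measure the signal at regular intervals and record each measurement as a binary number. The short sketch below is an illustration only (the sample rate, bit depth and waveform are assumptions, not values from the essay); it quantizes a smoothly varying "analogue" signal into 3-bit samples, where 3 bits give 2^3 = 8 levels and 16 bits would give 2^16 = 65,536.

```python
# Illustrative analogue-to-digital conversion: sample a smooth waveform at
# regular intervals and quantize each sample to a 3-bit binary number.
import math

BITS = 3                      # 3 bits -> 2**3 = 8 voltage levels (16 bits -> 65,536)
LEVELS = 2 ** BITS
SAMPLE_RATE = 16              # samples per second (assumed for the demo)

def analogue(t: float) -> float:
    """A smoothly varying 'analogue' signal between 0 and 1 volts."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * t)

def quantize(v: float) -> int:
    """Map a voltage in [0, 1] to the nearest of the discrete levels."""
    return min(LEVELS - 1, int(v * LEVELS))

samples = [quantize(analogue(n / SAMPLE_RATE)) for n in range(SAMPLE_RATE)]
print(samples)                                    # the samples as level numbers
print([format(s, f"0{BITS}b") for s in samples])  # the same samples as 3-bit codes
```

A digital-to-analogue converter simply maps each code back to a voltage. Because only the ON/OFF pattern has to survive transmission, added noise does not change the reconstructed values in the way it degrades an analogue waveform.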

Sunday, July 21, 2019

Travel Time Reliability Analysis

CHAPTER TWO
Literature Review

2.1 Introduction

Lyman (2007) states that travel time reliability is a vital measure of congestion and can serve as a benchmark for prioritizing improvements to a city transportation system. This research starts with a literature review of travel time reliability and its worth as a congestion measure. Travel time reliability can be denoted as the probability of successfully completing a trip within a specified time interval (Iida, 1999). Therefore, an increase in travel time will lead to unreliability and variability of travel time (Recker et al., 2005). A better understanding of travel time reliability and variability might assist transport planners to select proper transport policies in conjunction with reducing congestion problems as well as lessening the impact of different types of incidents (Recker et al., 2005). It can be said that the more reliable the transportation system, the more stable its performance. In addition, lower travel time fluctuation also contributes to less fuel consumption as well as fewer emissions, due to a reduced amount of acceleration and deceleration by vehicles (Vlieger et al., 2000). Moreover, from a transport user's point of view, more reliable travel times mean more predictable journey times and improved activity schedules. In accordance with just-in-time services, reliable travel time will significantly increase the freight industry's performance in delivering goods (Recker et al., 2005). As travel time reliability considers the distribution of travel time probability and its variation across the road network, the higher the travel time variance, the lower the travel time reliability (Nicholson et al., 2003). It can also be said that under ideal conditions travel time reliability would have a variance equal to zero. Indeed, an increase in its variance will therefore significantly reduce its reliability. However, the relationship between travel time variance and its reliability is not linear, so it cannot be generally accepted that a doubling of travel time variance will lead to a halving of its reliability. To conclude, greater travel time fluctuations will have significant impacts on transport network reliability.

According to the different purposes of travel time reliability studies, there are several types of travel time reliability survey. By comparing different aspects of travel time studies and by considering the complexity of data collection as well as data analysis, Lomax et al. (2003) have reviewed suitable assessments of travel time reliability. Based on the scope and limitations of each method, this work suggested different studies for measuring travel time variability and travel time reliability. The analysis of archived traffic data is not proper for measuring travel time reliability, due to the lack of consistent data and the lack of other attributes related to the traffic conditions; however, the data is easy to obtain. In addition, micro-simulation techniques have been used extensively; however, according to Lin et al. (2005) there are some deficiencies in travel time micro-simulation modelling in terms of the high need for data calibration. In order to capture real-life traffic conditions, some travel time reliability research has used probe vehicle methods. Since this method requires extensive labour and only covers some of the study area or some of the road segments, it cannot be applied to assessing travel time reliability on large road networks.
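Following Iida's (1999) definition quoted above, reliability can be written as the probability R(t*) = Pr(T <= t*) that a trip is completed within a chosen threshold t*. The sketch below estimates this empirically from a set of observed travel times; the threshold and sample values are illustrative assumptions, not data from this thesis.

```python
# Empirical travel time reliability in the sense of Iida (1999):
# the share of observed trips completed within a threshold time t*.
def reliability(travel_times: list[float], threshold: float) -> float:
    """Estimate R(t*) = Pr(T <= t*) from observed travel times (minutes)."""
    return sum(t <= threshold for t in travel_times) / len(travel_times)

observed = [22, 24, 23, 25, 31, 22, 27, 45, 23, 26]   # hypothetical trips (minutes)
print(reliability(observed, threshold=30))             # 8 of 10 trips within 30 min
```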
Indeed, Lomax et al. also recommended some reliability measurements based on examining reliability and variability percentages (e.g., 5%, 10% and 15%). Those approaches take into account the effect of irregular conditions in the form of the amount of extra time that must be allowed by travellers. The first measurement is the percent variation, which expresses the relationship between the amount of variation and the average travel time as a percentage. The second is the misery index, which calculates the amount of time by which the slowest trips exceed the average, by comparing the average travel time with the upper 10%, 15% and 20% of average travel rates. The last is the buffer time, which is the extra travel time that must be added so that 95% of trips arrive on time. In addition, since reliable travel time is a key indicator in users' route choice, many recent research works have investigated travellers' behaviour under unreliable travel times. According to a survey of travellers' route choice behaviour, the greater the variance of travel time on the selected links, the less attractive they are (Tannabe et al., 2007). Additionally, Bogers and Lint (2007) investigated traveller behaviour on three different road types in The Netherlands under uncertain conditions, as well as the impact of providing traveller information on route choice. They concluded that providing traveller information has a significant impact on travellers' decisions; in addition, based on their experience, travellers will choose the route with minimal travel time variance. It means that routes with low travel time reliability are not attractive to users. Indeed, according to Lomax et al.'s review, the best alternative for measuring travel time variability and route choice behaviour under uncertain conditions is the use of probe vehicles. Though this method is highly labour-intensive and expensive, it is more realistic (Lomax et al., 2003). Tannabe et al. (2007) then undertook an integrated GPS and web diary survey in Nara, Japan. This study found that travellers might change their route to reduce the uncertainty in travel time. In addition, there was a positive correlation between the coefficients of variation (CV) of the commuting routes. It was found that the appropriate functional hierarchy of roads may be disturbed by the uncertainty of travel time. These findings suggest that a reliability index of travel time is very useful and important for evaluating both the actual level of service (LOS) and the functional hierarchy of the road network. Recent travel time reliability research has investigated the relationship between traveller behaviour and the response to the provision of travel information systems when travellers experience high travel time variability. Asakura (1999) concluded that the Stochastic User Equilibrium model can generate user route choice behaviour based on different levels of information provision. This study analyzed two different groups, the first group being well-informed users and the second uninformed users. He concluded that providing better information can improve transportation network reliability.
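As a rough illustration of the three measures just described, the sketch below computes a percent variation, a misery index and a buffer index from a sample of travel times. The exact formulas vary between sources; the ones used here (standard deviation over mean, mean of the worst 20% relative to the mean, and 95th percentile relative to the mean) are plausible readings of the descriptions above rather than the precise definitions given by Lomax et al.

```python
# Hedged illustration of common travel time reliability measures.
# Formula choices are assumptions based on the descriptions in the text.
from statistics import mean, stdev

def percentile(data: list[float], p: float) -> float:
    """Simple nearest-rank percentile (p between 0 and 100)."""
    s = sorted(data)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

def percent_variation(times: list[float]) -> float:
    # variation relative to the average travel time, as a percentage
    return 100 * stdev(times) / mean(times)

def misery_index(times: list[float]) -> float:
    # how much slower the worst 20% of trips are than the average trip
    worst = sorted(times)[-max(1, len(times) // 5):]
    return (mean(worst) - mean(times)) / mean(times)

def buffer_index(times: list[float]) -> float:
    # extra time (relative to the average) needed so that 95% of trips arrive on time
    return (percentile(times, 95) - mean(times)) / mean(times)

trips = [22, 24, 23, 25, 31, 22, 27, 45, 23, 26]   # hypothetical travel times (minutes)
print(f"percent variation: {percent_variation(trips):.1f}%")
print(f"misery index:      {misery_index(trips):.2f}")
print(f"buffer index:      {buffer_index(trips):.2f}")
```

A perfectly reliable route (zero variance) would give zero for all three indices; the more the slow trips stretch out the right tail of the distribution, the larger the misery and buffer values become.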
In order to examine the different perspectives on travel time reliability for different persons with different purposes, Lo et al. (2006) studied the notion of the travel time budget, in which each traveller seeks to minimize their own individual travel time budget (the amount of time that the individual is prepared to devote to travelling), which means that the total travel time of the individual should not exceed their allocation of time to travel. To evaluate the link between the presence of ramps on motorways and travel time reliability, recent network reliability research has been undertaken in The Netherlands. This study analyzed whether the geometry of the road network also affected travel time reliability (Tu et al., 2007) by investigating the presence of ramps on six major motorways. The study concluded that the presence of ramps in the road network has reduced travel time reliability. Since road network reliability considers the probability of transportation system failures in meeting performance parameters such as reasonable travel time and travel cost, level of service and the probability of connectivity of the transport network, and lacks a measure of the consequences of link failure to the community, the concept of road network vulnerability might be an alternative way to fill some of the deficiencies of road network reliability, particularly in assessing the adverse socio-economic impacts on the community (Taylor et al., 2006).

ROAD NETWORK VULNERABILITY

Due to the potential socio-economic cost of a degraded transport network to the community, the concept of road vulnerability has been developed by researchers under the transport network reliability umbrella. The definition of vulnerability has not yet been generally agreed. Several authors' notions of vulnerability focus on the negative events that significantly reduce road network performance. Berdica (2002) defined vulnerability as a susceptibility to incidents that can result in a considerable reduction in road network serviceability. The link/route/road serviceability describes the possibility of using that link/route/road during a given period of time. Furthermore, since accessibility depends on the quality of the functioning of the transportation system, this concept relates to the adverse side of vulnerability in terms of reduced accessibility occurring for different reasons. As the idea of network vulnerability relates to the consequences of link failure and the potential for adverse socio-economic impacts on the community (Taylor et al., 2006; Jenelius, 2007a), vulnerability can be defined in the following terms:

1. A node is vulnerable if loss (or substantial degradation) of a small number of links significantly diminishes the accessibility of the node, as measured by a standard index of accessibility.

2. A network link is critical if loss (or substantial degradation) of the link significantly diminishes the accessibility of the network or of particular nodes, as measured by a standard index of accessibility.

Therefore, it can be concluded that road vulnerability assesses the weakness of the road network to incidents as well as the adverse impacts of degraded road network serviceability on the community. In relation to this definition of road network vulnerability, which focuses on two different aspects, selecting critical road network elements and measuring the consequences, Jenelius (2007a) has identified that road network vulnerability assessment can be distinguished into two stages.
Based on previous works, different approaches have been applied to scan wide road networks. Jenelius et al. (2006) selected particular major arterial roads connecting districts in northern Sweden as the worst-case scenario and selected road links at random as the average-case scenarios. Scott et al. (2006) introduced a topology index together with the relationship between capacity and volume in order to select critical links. Indeed, Jenelius (2007a) suggested that a comprehensive assessment of the road network is helpful for identifying roads that are likely to be affected by traffic accidents, floods and landslides. Berdica et al. (2003) undertook a comprehensive study to test three types of software for modeling road network interruptions. The study simulated short-duration incidents on University of Canterbury networks using SATURN, TRACKS and Paramics: a total blockage of one link on the small network was modeled at the macroscopic level with TRACKS, at the mesoscopic level with SATURN and at the microscopic level with Paramics. Based on the simulation, the packages gave different results in terms of their responsiveness to short incidents; for instance, Paramics might be considered a suitable package for short-duration incidents because it is more responsive than the other packages, while SATURN, which is more detailed in its formulation than TRACKS, is less responsive than TRACKS. Given the lack of a generally recognized measure of road vulnerability, it has been common practice to use measures such as the increase in generalized travel cost, the change in an accessibility index or the link volume/capacity ratio when one or more links are closed or degraded. Taylor et al. (2006) studied network vulnerability at the level of the Australian national road network and the socio-economic impact of degradable links in order to identify critical links within the road network, using three different accessibility approaches. The study introduced three indices for vulnerability. The first method measured the change in generalized travel cost between the full network and the degraded one; degrading a particular link increases the generalized travel cost, and the link that gave the highest increase in travel cost was identified as the most important link. The second method used changes in the Hansen integral accessibility index (Hansen, 1959) to identify the critical links; it was assumed that the larger the change after cutting a link, the more critical that link was, on the basis of the adverse socio-economic impacts on the community. The last approach considered changes in the Accessibility/Remoteness Index of Australia (ARIA) (DHAC, 2001). This method was similar to the second in that it identified critical links from the difference between the ARIA indices in the full network and those in the degraded network. Moreover, Taylor et al. (2006) also applied the third approach at the regional level in the state of Western Australia, concluding that removing a link had different impacts for different cities: cutting one link had only local impacts on several cities, whereas for other cities, where similar alternative roads were available, the ARIA indices did not change significantly.
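A Hansen-type accessibility comparison of the kind used in the second approach can be sketched as follows. The exponential impedance function, the beta value and the toy three-zone network are assumptions chosen purely for illustration; Taylor et al. (2006) may have used a different functional form.

```python
import numpy as np

def hansen_accessibility(opportunities, cost_matrix, beta=0.1):
    """Hansen-style accessibility A_i = sum_j B_j * exp(-beta * c_ij).
    The impedance form and beta value are illustrative assumptions."""
    B = np.asarray(opportunities, dtype=float)   # attractiveness of each zone j
    C = np.asarray(cost_matrix, dtype=float)     # generalized travel cost from i to j
    return (B[None, :] * np.exp(-beta * C)).sum(axis=1)

# Hypothetical 3-zone example: costs before and after degrading one link.
B = [100, 50, 80]
C_full = np.array([[0, 10, 20], [10, 0, 15], [20, 15, 0]])
C_degraded = C_full.copy()
C_degraded[0, 2] = C_degraded[2, 0] = 45   # link between zones 0 and 2 degraded

change = hansen_accessibility(B, C_degraded) - hansen_accessibility(B, C_full)
print("Change in accessibility per zone:", change.round(2))
```

The larger the accessibility loss caused by degrading a link, the more critical that link would be judged under this kind of index.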
Due to the importance of particular links within a wide road network, Jenelius et al. (2006) introduced an approach similar to that of Taylor et al. (2006). They studied link importance and site exposure by measuring the increase in generalized travel cost in the road network of northern Sweden, where the road network is sparse and traffic volumes are low. By assuming that the incident is a single link being completely disrupted or closed, so that the generalized cost increases, the links most critical to the operation of the whole system and the cities most vulnerable to link disruptions were determined. The study concluded that the effect of closing a link is quite local, and that the worst effects occur in regions where the road network is sparser and there are few good alternative roads. This research suggests that road network vulnerability assessment can be applied in road network planning and maintenance, to provide guidance to road administrations for road prioritization and maintenance. In addition, Taylor (2007) studied vulnerability in the South Australian road network, which included all the freeways, highways and major main roads. This research used a large, complex road network, evaluated changes in the ARIA indices for about 161 locality centers with populations exceeding 200 people, and identified the top ten critical links in the South Australian regional road network. Moreover, in relation to the vulnerability approach of D'Este and Taylor (2003), Chen et al. (2007) assessed the vulnerability of degradable networks by using network-based accessibility combined with a travel demand model. Their study concluded that the model can consider both demand and supply changes under abnormal conditions. Thus, network vulnerability can be assessed by considering the duration of the disruption (the increase in travel time) and modeling the user equilibrium both when alternative roads are available and when they are not (Jenelius, 2007b). Indeed, Scott et al. (2005) introduced the Network Robustness Index, which considers the ratio between link capacity and link volume and assigns a topology index to each link, then tests whether particular links can cope with changes in traffic demand when one or more links are closed or degraded. Jenelius (2007b) introduced a new method to incorporate dynamic road conditions and information by assessing the increase in travel time using an extended user equilibrium model. This study assumed that there was no congestion and that there was at least one alternative route between the origin and destination. Further, it assumed that road users have perfect information about the length of the road closure, so that they can decide whether to take a detour or to return to their origin and wait until the road reopens. The method calculated the additional travel time accrued from the moment road users were informed of the closure, including any waiting time until the road reopened; the difference between the normal travel time and the travel time under the road closure was taken as the increase in travel time. However, this study did not take into account the change in traffic flow on the alternative routes: it assumed that the mix of current and diverted traffic could still travel at free-flow speed. To assess the increase in flow when diverted traffic mixes with current traffic that already meets capacity or is already congested, the study conducted by Lam et al. (2007) can be considered; this method introduced the path preference index, which is the sum of the path travel time reliability index and the path travel time index.
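A generalized-cost-increase scan of the kind used in these link importance studies can be sketched as follows. The full-enumeration approach, the toy network, the penalty for disconnected origin-destination pairs and the use of the networkx library are all assumptions for illustration, not any author's exact method.

```python
import itertools
import networkx as nx

def link_importance(G, weight="cost"):
    """Rank links by the increase in total shortest-path cost when each link is
    removed (an illustrative full-scan measure, not a published algorithm)."""
    nodes = list(G.nodes)

    def total_cost(graph):
        cost = 0.0
        for o, d in itertools.permutations(nodes, 2):
            try:
                cost += nx.shortest_path_length(graph, o, d, weight=weight)
            except nx.NetworkXNoPath:
                cost += 1e6  # large penalty when an OD pair becomes disconnected
        return cost

    base = total_cost(G)
    ranking = []
    for u, v in list(G.edges):
        H = G.copy()
        H.remove_edge(u, v)
        ranking.append(((u, v), total_cost(H) - base))
    return sorted(ranking, key=lambda item: -item[1])

# Hypothetical toy network with generalized link costs.
G = nx.Graph()
G.add_weighted_edges_from(
    [("A", "B", 5), ("B", "C", 4), ("A", "C", 12), ("C", "D", 3)], weight="cost")
for link, delta in link_importance(G):
    print(link, "increase in total generalized cost:", round(delta, 1))
```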
To examine road network vulnerability in an urban area, Berdica et al. (2007) studied the vulnerability of the Stockholm road network by examining 12 scenarios involving partial and total closure of selected links, including bridge failure. The study also assessed road network degradation at three different times of day: the morning peak hour, the middle of the day and the afternoon peak hour. It concluded that closing one link or all links, as well as bridge failure, would increase total travel time and total trip length (on the assumption that travelers choose their minimum-time route according to the user equilibrium method). The different scenarios at different times gave different results, but the most vulnerable links were on the Essinge route and in the Western Bridge failure scenario. The study then calculated the increase in total travel time per day and multiplied it by 250 days to obtain the total increase in travel time on a yearly basis. Although the highest increase in total travel time was only 8% per day, when valued at 35 SEK per hour of travel it represented a significant increase in total travel cost. However, the study did not take into account the duration of the closure and left aside some impacts of link disruption, such as noise and pollution during the road closure. Moreover, Knoop and Hoogendoorn (2007) assessed spillback simulation in dynamic route choice in order to examine spillback effects, and then evaluated road network robustness and the vulnerability of links, concluding that it is necessary to assess the spillback effect in order to identify the most vulnerable links within a wide road network. Tampere (2007) investigated the vulnerability of highway sections in Brussels and Ghent. This work was quite challenging: it tried to consider different aspects of road network vulnerability criteria related to the number of vehicle hours lost due to major incidents. It comprised two steps: first, a quick scan to reduce a long list of potentially vulnerable links to a short list based on several criteria, and then a detailed vulnerability measurement of the short-listed links. Since this method used dynamic traffic assignment, there were some drawbacks during the model runs, such as the lack of traffic redistribution after the occurrence of an incident, which resulted in illogical traveler route choices. In general, this method successfully measured vulnerability by considering not only the traffic conditions but also the different road networks, and although it did not fully consider traffic assignment criteria, it is still considered a refinement over similar studies.
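As a small aside, the annualization used in the Stockholm study above can be made concrete with a short calculation; the baseline daily vehicle-hours figure below is an assumed value chosen purely for illustration.

```python
# Illustrative annualization of closure delay, using an assumed baseline figure.
baseline_daily_hours = 100_000   # assumed total vehicle-hours per day in the network
increase_share = 0.08            # 8% daily increase in total travel time (worst scenario)
working_days = 250               # days per year used for the annualization
value_of_time = 35               # SEK per vehicle-hour

extra_hours_per_day = baseline_daily_hours * increase_share
annual_cost = extra_hours_per_day * working_days * value_of_time
print(f"Extra delay: {extra_hours_per_day:,.0f} veh-h/day, "
      f"annual cost about {annual_cost:,.0f} SEK")
# 8,000 veh-h/day * 250 days * 35 SEK = 70,000,000 SEK per year under these assumptions.
```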
Measures of Congestion used in Transportation Planning Measures of congestion are intended to evaluate the performance of the transportation network and to diagnose problem areas. They provide information on how well the system has met certain stated goals and targets, and can also help to explain variations in users' experiences of the system. There are four general categories of congestion measures. The first category contains measures that describe the duration of congestion experienced by users in some way; these include delay, risk of delay, average speed, and travel time. The next category includes measures that analyze how well the system is functioning at a given location. This category primarily consists of the volume-to-capacity (V/C) ratio, which is usually expressed as a level-of-service (LOS) category. LOS is a performance rating that is often used as a technical way to express how well a facility is functioning; for example, a facility functioning poorly is likely to be rated LOS F, but could just as easily be described as poor. The third category is that of spatial measures, including queue length, queue density, and vehicle miles traveled. It is important to note that some of the duration and spatial measures are actually measured as point measures. The final category consists primarily of travel time reliability and the number of times a vehicle stops because of congested conditions. Easily the most common measure of traffic congestion is the volume-to-capacity ratio. The V/C ratio compares the number of vehicles using a facility against the number of vehicles the facility was designed to accommodate. This ratio is an important measure for planners and represents an easily understandable indicator of whether or not a roadway is congested. However, it can lead to some philosophical problems, such as whether transportation systems should be built to handle the highest demand or the average demand, and what level of service is acceptable. In addition, it is difficult to measure the capacity of a roadway accurately. The volume-to-capacity ratio is an important tool for comparing a roadway's performance to other roadways and over time, but it does not necessarily reflect the overall user experience and values in the system. Despite the prevailing use of the volume-to-capacity ratio, and perhaps because of its inherent philosophical difficulties, the Federal Highway Administration (FHWA) has strongly encouraged agencies to consider the travel time experienced by users as the primary basis for congestion measurement. The FHWA also states that currently used measures of congestion are inadequate for determining, from a user's perspective, the true impact of the congestion that clogs up the transportation system, and that they cannot adequately measure the impacts of congestion mitigation strategies.
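To make the V/C-to-LOS mapping described above concrete, here is a minimal sketch; the grade thresholds are illustrative only and are not the official Highway Capacity Manual breakpoints.

```python
def level_of_service(volume, capacity):
    """Map a volume-to-capacity ratio to a letter grade.
    The thresholds below are illustrative, not official HCM breakpoints."""
    vc = volume / capacity
    thresholds = [(0.35, "A"), (0.55, "B"), (0.75, "C"), (0.90, "D"), (1.00, "E")]
    for limit, grade in thresholds:
        if vc <= limit:
            return vc, grade
    return vc, "F"  # demand exceeds capacity

vc, grade = level_of_service(volume=1850, capacity=2000)
print(f"V/C = {vc:.2f}, LOS {grade}")
```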
What is travel time reliability? As mentioned in section 1.1.1, the OECD (2010) provides a general definition of travel time reliability: the ability of the transport system to provide the expected level of service quality, upon which users have organized their activities. The key to this definition is that a route is reliable if the expectations of the user are in accordance with the experienced travel time, but this does not directly lead to a TTR measure. Nonetheless, the definition shows that user expectations should be taken into account when selecting a proper TTR measure. Congestion is common in many cities and few people will dispute this fact. Drivers become used to this congestion, expecting and planning for some delay, especially at peak driving times. Most drivers budget extra time to accommodate traffic delays or adjust their schedules. Problems arise when delays turn out to be much worse than expected. Travelers are far less tolerant of unpredicted delays, which make them late for work or vital meetings, cause them to miss appointments, or leave them with additional childcare fees. Shippers and freight forwarders who experience unpredicted delays may lose money and see just-in-time delivery and manufacturing processes interrupted. In the past, traffic congestion was communicated only in terms of simple averages. Yet over a year of commuting, most travelers experience and remember something different from the simple average. Travel times differ from day to day, and travelers remember the few bad days on which they suffered unexpected delays. Commuters build a time cushion, or buffer, into their trip planning to account for this variability. The buffer means they arrive early on some days, which is not a bad thing in itself, but the additional time is carved out of their day and could have been used for activities other than commuting. Travel time reliability time frames: travel time reliability can be categorized by its time frame. Bates et al. (2001) discuss three levels of variability: inter-day, inter-period and inter-vehicle. Martchouk et al. (2009) explain these as follows. Inter-day: variations in the travel time pattern between days. Some days of the week might have substantially different traffic volumes than others; for example, a Sunday will generally have less traffic than a Monday. The same weekdays should have about the same travel time pattern, but there can still be variations, and events such as road works or inclement weather cause inter-day variations. Inter-period: variations in travel times during a day. Many road sections have a morning and an evening peak, during which travel times are longer; these variations are caused by variations in traffic volume. Inter-vehicle: relatively small differences in travel times between vehicles in a traffic stream, caused by interactions between vehicles and variations in driver behavior, including lane changes and speed differences. Although Martchouk et al. (2009) show that individual travel times on a motorway section can vary strongly in similar conditions due to driver behavior, this study focuses on inter-day variations. It is assumed that inter-vehicle variations have no significant influence on travel time reliability; in urban areas the speed difference between vehicles will generally be smaller than on highways, because on highways the average speed is higher, there is more overtaking, trucks cannot drive at the maximum allowed speed, and routes are longer. Inter-period variations are also not considered, because it is presumed that road users know that travel times within a day vary according to a more or less fixed pattern. It is the deviations from this daily pattern which are interesting in the light of TTR, since these cannot be predicted by road users. Therefore, the focus of this investigation is on inter-day variation.
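A minimal sketch of the inter-day view: collect the travel time at the same departure time over a number of days and look at the spread. The figures below are made-up sample values, and the use of the coefficient of variation as the summary statistic is an assumption for illustration.

```python
import numpy as np

# Hypothetical 8:00 am travel times (minutes) on the same route over two weeks.
daily_times = {"Mon": 34, "Tue": 31, "Wed": 33, "Thu": 38, "Fri": 29,
               "Mon2": 52, "Tue2": 32, "Wed2": 33, "Thu2": 36, "Fri2": 30}

t = np.array(list(daily_times.values()), dtype=float)
mean_tt = t.mean()
cv = t.std(ddof=1) / mean_tt   # inter-day coefficient of variation

print(f"Mean travel time: {mean_tt:.1f} min, inter-day CV: {cv:.2f}")
# A high CV signals poor day-to-day reliability even if the average looks acceptable.
```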
Why is travel time reliability important? Travel time reliability is vital to every user of the transportation system, whether they are freight shippers, transit riders, vehicle drivers or even air travelers. Reliability allows business and personal travelers to make better use of their own time. Because reliability is so significant, transportation planners, transportation system users and decision makers should consider travel time reliability a key measure of performance. The benefits of traffic management and operations activities are better quantified for traffic professionals with travel time reliability measures than with simple averages. For instance, consider a typical before-and-after study that attempts to quantify the benefits of an accident management or ramp metering program. The improvement in average travel time may seem modest; however, reliability measures will show a much greater improvement because they capture the effect of improving the worst few days of unexpected delay. The Beginning of Travel Time Reliability as a Performance Measure Hellinga (2011) states that in the past, analysis of transportation networks focused primarily on the estimation and evaluation of average conditions for a given time period. These average conditions might be expressed in terms of average traffic stream speed, average travel time between a given origin and destination pair, or some average generalized cost of travel from an origin to a destination. This generalized cost typically includes terms reflecting time as well as monetary costs, with the time-based terms multiplied by a value-of-time coefficient. A common characteristic of all of these approaches is that they reflect average or expected conditions and do not reflect the impact of the variability of those conditions. One reason for this is that models become much more complicated when variability is included; another is that a vast amount of data covering a long period of time is needed, and collecting data is often costly and time-consuming. Hellinga (2011) also observes that more recently there has been increasing interest in the reliability of transportation networks. It is hypothesized that reliability has value to transportation network users and may also affect user behavior, including destination choice, route choice, time of departure choice, and mode choice. It is useful for road managers and planners to understand the relationships between TTR and road user behavior, because this knowledge can be used to predict or even deliberately influence that behavior by applying traffic management measures. Consequently, there has been an effort to better understand the issues surrounding reliability, and to answer a number of important questions such as: 1. How is transportation network reliability defined? 2. How can or should network reliability be measured in the field? 3. What factors influence reliability and how? 4. What instruments are available to network managers, policy makers, and network users that affect reliability, and what are the characteristics of these causal relationships? 5. What is the value of reliability to various transportation network users (e.g. travelers, freight carriers, etc.) and how is this value affected by trip purpose? 6. How do transportation network users respond to reliability in terms of their travel behavior (e.g. departure time choice, mode choice, route choice, etc.)? 7. How can reliability (and its effects) be represented within micro and macro level models?
(Microscopic models focus on individual vehicles, while macroscopic models pertain to the properties of the traffic flow as a whole.) 8. How important is it to consider the impact of reliability in transportation project benefit/cost evaluations? 9. Does the consideration of the impact of reliability within the project evaluation process alter the order of preference of projects within the list of candidate projects? Hellinga (2011) states that the above list of questions, which is likely not exhaustive, indicates that there currently exists a very large knowledge gap with respect to reliability. Various research efforts around the world are beginning to fill these gaps, but the body of knowledge is still relatively sparse and there is not yet general agreement even on terminology. Note that the first, second, and (partially) fifth questions are part of this investigation. What measures are used to quantify travel time reliability? The four recommended measures include the 90th or 95th percentile travel time, the buffer index, the planning time index, and the frequency with which congestion exceeds some expected threshold. These measurements are emerging practices, some of
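A minimal sketch of these four measures, using commonly cited formulas; the exact formulas, the congestion threshold and the sample data are assumptions for illustration rather than any agency's official definitions.

```python
import numpy as np

def reliability_indices(travel_times, free_flow_time, threshold=None):
    """Compute the four measures named above with commonly used formulas
    (formula details and the default threshold are illustrative assumptions)."""
    t = np.asarray(travel_times, dtype=float)
    mean_tt = t.mean()
    p95 = np.percentile(t, 95)

    buffer_index = (p95 - mean_tt) / mean_tt        # extra share of time to budget
    planning_time_index = p95 / free_flow_time      # 95th percentile vs free flow
    if threshold is None:
        threshold = 1.5 * free_flow_time            # assumed congestion threshold
    freq_over_threshold = (t > threshold).mean()    # share of trips over threshold

    return {"95th_percentile": p95,
            "buffer_index": buffer_index,
            "planning_time_index": planning_time_index,
            "freq_over_threshold": freq_over_threshold}

times = [22, 24, 23, 31, 25, 22, 40, 26, 24, 23, 28, 22, 35, 24, 25, 23, 27, 45, 24, 26]
print(reliability_indices(times, free_flow_time=20))
```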

Globalization in the nineteenth and twentieth centuries

Introduction: What is globalisation? Globalisation is the integration of cultures and economies across geographical borders. Globalisation has made trade and communication possible throughout the world in the shortest possible time. Compare and contrast the main features of globalisation in the nineteenth and twentieth centuries. The differences between globalisation in the nineteenth and the twentieth centuries are as follows. While free trade was imposed on the rest of the world, markets in third world countries were opened simply because they were not independent nations. Direct foreign investment increased rapidly from 1870 to 1913. The first half of the nineteenth century saw free trade being practised only by Britain. However, in the twentieth century government debt became tradable in the global market for financial assets. The similarities between globalisation in the nineteenth and the twentieth centuries are as follows. In the nineteenth century international trade was attributed to trade liberalisation, and direct foreign investment increased rapidly during the nineteenth century. International bank lending was also substantial. The late nineteenth and early twentieth centuries witnessed a significant integration of international markets, providing a channel for portfolio investment flows, and the cross-national ownership of securities, including government bonds, reached very high levels during this period. In the twentieth century there was also an increase in the degree of openness in most countries in international trade, investment and finance, while the second half of the twentieth century witnessed a phenomenal expansion in international trade flows. http://www.independent.co.uk/news/uk/politics/deglobalisation-what-is-it-and-why-britain-should-be-scared-1521674.html (accessed 01 November 2010 6:23 a.m.) What is deglobalisation? Deglobalisation is the disintegration of the economies of the world to their individual status, where they do not engage in trade, imports and exports with other countries. To what extent has the 2008 crisis and recession brought about deglobalisation? Globalisation brought with it free trade of goods and services between countries and across borders. Many persons left their countries of birth to migrate to other countries in search of a better life; nurses from as far away as Trinidad were, and still are, being employed in England and America. Persons from anywhere in the world can go to America and enjoy doubles, which is a Caribbean (East Indian) delicacy. The debate on globalisation continues as people try to make sure that the benefits of global trade outweigh the costs for all countries. However, with the recession of 2008 many developed and developing nations have felt the impact of the recession, specifically in Europe and the United States. Recession is caused by inflation, where too much money is chasing too few goods. In Ireland, many homeowners took out a second mortgage to purchase second homes. Regrettably, many of these homeowners were unable to repay these loans and the banks took control of the properties. In many instances these homes were sold for less than the homeowner owed to the financial institution. Many persons who migrated to these countries in search of a better standard of living and employment opportunities are now leaving these countries and returning to their country of birth.
This is as a result of an increase in unemployment, due to many companies being unable to pay their workforce and meet their overhead expenditure. Though economies of the world are experiencing economic recession, globalisation has to a large extent allowed many countries to survive, since countries can still trade their goods and services with other countries in the hope of rebuilding their economies. To what extent do the positive aspects of globalisation outweigh its negative effects? According to Deepak Nayyar, globalisation is the expansion of economic transactions and the organisation of economic activities across the political boundaries of nation states. Globalisation is associated with increasing economic openness, growing economic interdependence and deepening economic integration in the world economy. Globalisation has allowed persons from all economic brackets to be exposed to what the world has to offer in terms of goods and services. The negative effects of globalisation are as follows. Nayyar stated that persons who cannot afford to purchase these goods and services are left frustrated or alienated, which can lead to increases in crime, violence and drug use; some seek refuge in ethnic identities and cultural chauvinism. For example, in Trinidad and Tobago, whenever an international performer is coming in there is usually a high incidence of robberies, since persons who cannot afford to attend these shows rob others in an attempt to do so. Globalisation has also resulted in a widening of the gap between the rich and the poor in the world's population, as well as between rich and poor people within countries. Income distribution within countries also worsened with globalisation and income inequality increased. The incidence of poverty increased in most countries of Latin America, the Caribbean and Sub-Saharan Africa during the 1980s and the 1990s. Nayyar further stated that much of Eastern Europe and Central Asia experienced a sharp rise in poverty during the 1990s. Unemployment in the industrialised countries has increased substantially since the early 1970s and has remained at high levels since then. Due to trade liberalisation there has been an increase in wage inequality between skilled and unskilled workers, since the liberalised labour market has also become highly competitive. For example, many skilled construction workers from other Caribbean countries and from China are being used in Trinidad's construction sector, since there has been a shortage of this expertise locally. M. Panic, in his article "Does Europe need neoliberal reforms?", raised negative issues that support what Nayyar stated, the evidence of which is as follows: "the extremely objectionable nature of the unregulated, free market version of the system was demonstrated globally in the 1930s with devastating consequences: its inherent tendency to prolonged and costly crises (the Great Depression, mass unemployment), social deprivation and division (extreme poverty for the many in the midst of great wealth for the few)... German economic growth and levels of unemployment, for so long among the most impressive in the industrialized world, were only slightly better. Again, empirical evidence in support of the neoliberal claim that unemployment in Germany was caused by over-regulation was found to be extremely weak" (Fuchs and Schettkat, 2000, p.
238). Conclusion: While many world trade and export-led growth strategies are collapsing, surplus countries face big obstacles in expanding domestic demand, and many emerging market economies are in deep trouble. World trade is collapsing much faster than expected, and much faster than predicted on the basis of the past. An example of this can be seen in the United States and Europe, specifically Ireland, where many homeowners are unable to pay their mortgages. Globalisation has also resulted in the devaluation of the US dollar, which is a direct impact of the recession that the country is presently facing. For any nation to be globally imbalanced can only work to that country's and its population's disadvantage, since the negative impacts are not only economic but also far-reaching social issues. Therefore, based on the information listed above, I can conclude that the negative effects far outweigh the positive. APPENDIX A: [Figure: World trade volume rose in August after a dip in July; the Eurozone was the only advanced market to see export growth; world industrial production also grew.]

Saturday, July 20, 2019

Abuse of Women in Alice Walker's The Color Purple Essay

The Abuse of Women in The Color Purple. Alice Walker's The Color Purple is an excellent account of the life of poor black women who must suffer not only social ostracism due to gender and skin color but who also suffer greatly at the hands of black men. This is true in terms of infidelity, physical and verbal abuse, and sexual abuse. The Color Purple revolves around the life of Celie, a young black woman growing up in the poverty-ridden South. In order to find herself and gain independence, Celie must deal with all manner of abuse, including misogyny, racism and poverty. When she is a young girl of just 14, Celie is sexually assaulted by a man she believes is her father. She has two children by her rapist, both of whom he takes to a Reverend. When her mother dies, this man known as "Pa" marries Celie to a man she will only refer to as "Mr. ___." Verbal and physical abuse is a constant in Celie's life. The man she married makes her raise his two children from another marriage, despises her, and physically and verbally abuses her. Celie is continually told she is skinny, ugly, and got nothing. When Shug first meets Celie she says, "You sure is ugly" (Walker 48). Celie is miserable with Mr. ___, a man who wanted to marry her sister Nettie. Nettie comes to see her sister at Mr. ___'s house and tells her before departing, "Don't let them run over you. You got to let them know who got the upper hand" (Walker 18). Nettie and Celie both mature throughout the course of the novel, a maturation they keep abreast of through a series of letters exchanged with one another. Despite the constant abuse visited upon Celie, she matures in the novel and becomes an independent woman. She is able to do so partly... ...When people are abused (as many black men in the South were by whites), they typically turn to abusing others. This is exactly what we see in the novel, and it is only the love, nurturing, and strength of the women that create some kind of socialization, bonding, and an atmosphere of love and security. Without them there would be no such environment, but rather one built on hatred, abuse, and sexual assault. It is easy to see why Walker wrote this book: to show that no matter how much unjust abuse one must endure, one can find a way to escape its confines and relearn how to feel and love. The color purple is what most of the women in this novel are at one point from physical violence of one sort or another, but when it comes to their hearts they remain bright red and full of love. WORKS CITED: Walker, A. The Color Purple. New York: Pocket Books, 1996.

Friday, July 19, 2019

Analysis of Exodus 21-24 Essay

Exodus 21-24 was definitely quite an instructive piece of literature. It was almost raw in its nature as a text or "book", more like reading an excerpt from a piece of non-fiction, most similar to the instruction manual you get when you buy a disassembled bike or desk. As in being enrolled in a police academy, there was a definite sense of a master-slave relationship in the air. It is like something never before seen in the Torah; these chapters showed a whole new YHWH, a YHWH who is feared like the school principal in an elementary school. Not even mom and dad have come on so strong as to the dos and don'ts of living life. It seems as if YHWH was pushed to such a point that YHWH had no choice but to intervene in the lives of his children, and set the rules for the pl...

Thursday, July 18, 2019

Major World War I Battles Essay

1914 - The First Battle of the Marne. Up until September of 1914, the German army had steadily advanced through Belgium and France and was nearing the capital of France, Paris. Luckily, in the First Battle of the Marne, six French armies and one British army were able to stave off the German advance and set the stage for trench warfare for the next four years. 1915 - Second Battle of Ypres. This was the second battle for the city of Ypres, which was located in western Belgium. For the Germans, it marked their first widespread use of poison gas during the war. At Gravenstafel, Canadian troops were able to hold off the Germans by urinating into cloths and covering their faces with them. 1916 - Battle of Verdun. The Battle of Verdun was meant to be Germany's final push to break through French lines. A common expression was "to bleed the French white". Both sides suffered immense casualties; however, there was no clear victor, even though the Germans were forced to withdraw. 1917 - Battle of Caporetto. In this battle, otherwise known as the Twelfth Battle of the Isonzo, Austro-Hungarian forces reinforced by German infantry finally broke through the Italian front line and routed the entire Italian army. Poison gas and storm troopers contributed effectively to the massive collapse of the Italian army. 1918 - Battle of Cantigny. This was the first major battle involving U.S. forces up until that point in World War I. While Cantigny was a relatively easy objective and was overshadowed by larger battles occurring elsewhere on the front, this battle was significant in demonstrating that U.S. forces could be trusted to hold their own. 1. Up until the U.S. entrance into the war, the U.S. had already been providing massive amounts of supplies to the French and British, despite its claims of neutrality. This one-sided trading led to German attacks on U.S. merchant vessels and was one of the reasons the U.S. entered the war. The American Expeditionary Force did not actually face many battles, as it arrived in Europe in large numbers only in early 1918. It did prove its worth and strength, however, in the Battle of Cantigny, where U.S. troops alone were able to capture the town of Cantigny and repulse several fierce German counterattacks. 2. Women had a huge role in the war effort at home, while African Americans directly contributed to the war effort. Women filled many of the jobs men left behind, especially in factories that were now facing huge demands for war supplies and low numbers of workers. Without women rising to fill these ranks, the American war effort would have been severely hindered. African Americans, although still discriminated against and segregated into separate units, fought bravely and fiercely in World War I and earned the respect of many soldiers around them. 3. U.S. society

Compare and Contrast 1984-Brave New World Essay

"Do you begin to see, then, what kind of world we are creating?" (Orwell, 1950, p. 267). George Orwell, author of 1984, released in 1950, presents the idea of a society that proves to be a dystopia, as it is completely based on fear and one rarely sees rejoicing, while on the other hand, Aldous Huxley's Brave New World presents the idea of a functional utopia where feelings are destroyed and no one is sorrowful because they don't know happiness, though all this could change at the hands of one outcast. These two societies, in different ways (one through fear and the other through psychological and material enjoyment), present successful ways to maintain order and power, although they differ greatly and their outcasts have different aims and fates. In a society where fear is predominant, physical and mental capacities reach a stagnant state, as the will to survive and loyalty become predominant. In a different society, where men are created to the liking of their rulers and are controlled with drugs instead of fear, the meaning of a utopia can disappear, yet subjects will think everything is perfect. Finally, a sense of false equality, manipulation, and fear allows total and utter control. In societies like the ones depicted in these two books, nothing is perfect and nothing is true. Members of these communities cannot know what is true, because that would make them dangerous to their leaders. The use of fear in 1984 and the idea of Big Brother facilitate control, as constant surveillance and the Thought Police put everything a member of this society does to the test, and when they make a false move, they know they are done for. The scene where Winston talks about two plus two not being four, or whether gravity is a force that works, truly depicts the kind of fear instilled by the Party. "The heresy of heresies was common sense. And what was terrifying was not that they would kill you for thinking otherwise, but that they might be right. For, after all, how do we know that two and two make four? Or that the force of gravity works?" (Orwell, 1950, p. 80). As explained by the quote, doubting anything the Party said could end in negative ramifications. It is incredible how people can adjust to these changes. Things that seem so simple are questioned and believed, which is even worse. The mutability of the Party's adherents is astonishing, as they change whenever the Party needs them to and accept the most ridiculous ideas as if they were normal, and all of this is achieved through fear. Winston also mentions the fact that your mind can fail you: "The most deadly danger of all was talking in your sleep. There was no way of guarding against that, so far as he could see" (Orwell, 1950, p. 64). Even thinking about the Party's flaws and going against its ideas can be deadly, as sleep talking cannot be controlled and can always be heard. The third example of fear and its installment in Winston's mind is when he receives the letter from Julia: "One, much the more likely, was that the girl was an agent of the Thought Police (...) the thing that was written on the paper might be a threat, a summons, an order to commit suicide, a trap of some description" (Orwell, 1950, p. 106).
This displays how fear can make something normal seem completely hazardous, and how trust in others can be purged, as you do not know what to expect from anyone. Even though people may be unhappy, this proves to be successful, as no one goes against the status quo. In contrast to 1984, Brave New World does not need fear, since if the rulers want change they simply create new beings or hand out soma, and still control is maintained. Physical and psychological manipulation gives a sense of order even though it is non-existent, and drug use maintains a false control that seems fine to everyone inside that sphere; however, when someone notices this false control, he becomes a problem. In Huxley's Brave New World, which is supposed to be a utopia, equality is not present, and equality is what a utopia is supposed to be about. Within the social classes, the top ones still think of the lower ones as worthless and basically inferior. Lenina demonstrates this in the following quote: "What a hideous colour khaki is, remarked Lenina, voicing the hypnopaedic prejudices of her caste" (Huxley, 1946, p. 42). This quote demonstrates that even messages coming from the government promote separatist ideas while at the same time they are supposed to promote equality. Drug use and psychological manipulation keep the Epsilons happy with how they are, also maintaining absolute control over society. Hypnopaedia, as seen before, does not always promote the values of a utopia as it should. Another hypnopaedic message demonstrating this is: "Every one works for every one else. We can't do without any one. Even Epsilons are useful. We couldn't do without Epsilons. Every one works for every one else. We can't do without any one" (Huxley, 1946, p. 50). The use of soma is also very important, as it is a way of escaping the reality of a supposed utopia that is in fact anything but a utopia: "Why you don't take soma when you have these dreadful ideas of yours. You'd forget all about them. And instead of feeling miserable, you'd be jolly. So jolly," she repeated and smiled (...) (Huxley, 1946, p. 62). It is very important to realize how this method of control still proves to be successful and allows order for the government to be preserved. Whether it is instilling fear, secret organizations, and complete surveillance, or genuinely creating subjects, it is evident that both methods are successful, as they sustain order and easy management of society. Winston, who was the heart and soul of change in 1984, ended up failing, and the idea, person, or whatever Big Brother is, whom he hated the most, truly ended up taking him over, as is mentioned in the novel: "He had won the victory over himself. He loved Big Brother" (Orwell, 1950, p. 268). The only man capable of creating change and denouncing the artificiality his government was based on was gone; fear has now proved to be a working method of control. Similarly, in Brave New World, the outsider and the only man capable of making others realize the lie they lived in ended up killing himself: "Slowly, very slowly, like two unhurried compass needles, the feet turned towards the right; north, north-east, east, south-east, south, south-south-west; then paused, and, after a few seconds, turned as unhurriedly back towards the left. South-south-west, south, south-east, east..." (Huxley, 1946, p. 176). This also verifies the effectiveness of this method and, according to this, both men failed to change the status quo.
By the end of both novels, no change was made, and both fear and manipulation proved to be effective ways of maintaining control. As the predominance of both fear and manipulation grows, methods of changing society and its governance become scarce, and even those who go to extremes find themselves in unfeasible situations where physical and mental capacities will be pushed to new limits, yet not far enough to revolutionize their societies. This is generally because most of those who have been subjected accept the reality in which they live, which is what both Winston and John go through, though their ways of accepting it were vastly different. Even though there are some with strong minds and others who have not been toyed with, it will never be enough to overturn fear or the manipulation of the human being.