Thursday, November 28, 2019

Alcoholism is a Disease

In the U.S. alone, over 15 million people are currently affected by alcoholism. Alcoholism is a chronic and progressive disease that includes problems controlling your drinking, being preoccupied with alcohol, continuing to use alcohol even when it causes problems, having to drink more to get the same effect (physical dependence), or having withdrawal symptoms when you rapidly decrease or stop drinking.

Alcoholism is a chemical disease because alcohol breaks down differently in the stomach of an alcoholic and has an entirely different effect on the brain of the alcoholic than on the non-alcoholic. The main organ involved in alcoholism is the brain. Alcohol interferes with the electrical charges of nerve cells that send messages to the brain about thoughts, feelings and learning. After chronic exposure to alcohol, neurotransmitters are permanently altered, which can also lead to brain shrinkage. Advanced stages of alcoholism can cause dementia and psychosis, and as tolerance increases, alcoholics show signs of disorientation, paranoia and aggressiveness. Heavy alcohol intake reduces some of the brain's chemicals, such as dopamine and serotonin, which give us the feeling of well-being and pleasure. At the same time, alcohol releases chemicals that cause stress and depression. It is this chemical imbalance in the brain that may be responsible for alcoholism.

Alcoholism is also progressive, which means it gets worse over time. Alcoholism causes biological, psychological, social and spiritual problems, and as the disease progresses, the alcoholic's ability to function daily declines. Personality changes result from neuropsychological impairments to the person's cognitive and affective functioning; they think, feel and behave differently than before, but believe they are functioning normally. Social or relationship problems arise within the family, the community and at work. Family life deteriorates to the point that treatment for family members is necessary for their own recovery.

Another reason why alcoholism is a disease is genetics and environmental influences. It is estimated that 40-60% of the risk for developing any addiction, including alcoholism, is genetic. Studies with adoptees have shown that having a familial history of alcoholism increases the risk of developing an alcohol dependency. According to one study, having a familial history of alcoholism but being raised in a household without alcohol abusers still leads to a fivefold increase in the odds of becoming an alcoholic. However, environmental factors are still important: the same study found a larger increase in the odds of becoming an alcoholic given both a family history of alcoholism and a pro-alcoholic environment. A person's surroundings can play a strong role on the road to potential alcohol abuse and alcoholism. An environment that promotes drinking can make it difficult for many people to avoid the temptation of drinking to excess.

Opposing viewpoints argue that alcoholism is not a disease. One of the most common arguments against the disease model of alcoholism is that the model is only useful for treating people who consider themselves alcoholics. Another claim is that excessive drinking can cause physical disease and involve physical dependence without therefore being a disease itself.
It is also believed by skeptics that to be diagnosed with alcoholism means a person has to give up their identity as a normal person and take on the identity of someone with a disease. While all of these claims are understandable, there are faults within them that leave "alcoholism is not a disease" an unsupported allegation. Alcoholism is described as a physical compulsion along with a mental obsession, which falls into the category of addiction. Addiction has been proven to be a brain disease, and as mentioned earlier in this article, the brain is the organ that goes hand in hand with alcoholism. The brain is what makes one crave an alcoholic beverage, and it is what causes one to have extreme withdrawal effects when the desire for alcohol is unmet. Alcoholism is an addiction, and addiction is a brain disease. Also, the belief that the alcoholic's identity turns from a "normal" identity to a "diseased" one is illogical. Diabetes is a disease, AIDS is a disease, and those who carry those diseases don't let them define their lives. Alcoholism is a chronic and progressive disease that includes problems controlling your drinking, being preoccupied with alcohol, continuing to use alcohol even when it causes problems, having to drink more to get the same effect (physical dependence), or having withdrawal symptoms when you rapidly decrease or stop drinking. If alcoholism continues to be overlooked as a disease, we will continue to have over 80,000 deaths a year in the U.S. from excessive alcohol use. If you or someone you know is suffering from alcoholism, don't let them remain untreated. There are several ways of treating this disease, such as therapy, counseling and medication. Alcoholism is a disease and can, and should, be treated as such.

Tuesday, November 26, 2019

The Productivity of Information Technology

THE PRODUCTIVITY OF INFORMATION TECHNOLOGY: Review and Assessment
Erik Brynjolfsson
CCS TR #125, December 1991

This research was sponsored by the MIT Center for Coordination Science, the MIT International Financial Services Research Center, and the Sloan Foundation. Special thanks are due Michael Dertouzos and Tom Malone for encouraging me to pursue this topic as part of a study group at the MIT Laboratory for Computer Science. I would like to thank Ernie Berndt, Geoffrey Brooke, and Chris Kemerer for valuable comments and Marshall Van Alstyne and Peter Perales for excellent research assistance. Only I am responsible for any remaining deficiencies.

Abstract

Productivity is the bottom line for any investment. The quandary of information technology (IT) is that, despite astonishing improvements in the underlying capabilities of the computer, its productivity has proven almost impossible to assess. There is an increasing perception that IT has not lived up to its promise, fueled in part by the fact that the existing empirical literature on IT productivity generally has not identified significant productivity improvements. However, a careful review, whether at the level of the economy as a whole, among information workers, or in specific manufacturing and service industries, indicates that the evidence must still be considered inconclusive. It is premature to surmise that computers have been a paradoxically unwise investment. A puzzle remains in the inability of both academics and managers to document unambiguously the performance effects of IT. Four possible explanations are reviewed in turn: mismeasurement, lags, redistribution and mismanagement. The paper concludes with recommendations for investigating each of these explanations using traditional methodologies, while also proposing alternative, broader metrics of welfare that ultimately may be required to assess, and enhance, the benefits of IT.

Keywords: Productivity, Computers, Performance measurement, Economic value, Investment justification.

CONTENTS
The Productivity Paradox: A Clash of Expectations and Statistics
Dimensions of the Paradox
  Economy-wide Productivity and Information Worker Productivity
  The Productivity of Information Technology Capital in Manufacturing
  The Productivity of Information Technology Capital in Services
Leading Explanations for the Paradox
  Measurement Errors
  Lags
  Redistribution
  Mismanagement
Conclusion
  Summary
  Where Do We Go From Here?
Tables and Graphs
Bibliography

The Productivity Paradox: A Clash of Expectations and Statistics

The relationship between information technology (IT) and productivity is widely discussed but little understood. On one hand, delivered computing power in the US economy has increased by more than two orders of magnitude in the past two decades (figure 1). On the other hand, productivity, especially in the service sector, seems to have stagnated (figure 2).
Given the enormous promise of IT to usher in "the biggest technological revolution men have known" (Snow, 1966), disillusionment and even frustration with the technology are increasingly evident in statements like "No, computers do not boost productivity, at least not most of the time" (Economist, 1990) and headlines like "Computer Data Overload Limits Productivity Gains" (Zachary, 1991) and "Computers Aren't Pulling Their Weight" (Berndt and Morrison, 1991a). The increased interest in the "productivity paradox," as it has become known, has engendered a significant amount of research, but, thus far, this has only deepened the mystery. The results are aptly characterized by Robert Solow's quip that we see computers everywhere except in the productivity statistics, and by Bakos and Kemerer's (1991) more recent summation that "these studies have fueled a controversial debate, primarily because they have failed to document substantial productivity improvements attributable to information technology investments." Although similar conclusions are repeated by an alarming number of researchers in this area, we must be careful not to overinterpret these findings; a shortfall of evidence is not necessarily evidence of a shortfall. Nonetheless, given the increasing significance of IT in the budgets of most businesses and in the nation as a whole, continued investment cannot be justified by blind faith alone.

This paper seeks to contribute to the research effort by summarizing what we know and don't know, by distinguishing the central issues from diversions, and by clarifying the questions that can be profitably explored in future research. After reviewing and assessing the research to date, it appears that the shortfall of IT productivity is at least as likely due to deficiencies in our measurement and methodological tool kit as to mismanagement by developers and users of IT. One can only conclude, as Attewell and Rule (1984) did in an earlier survey, that we still have much to learn about how to measure the effects of computers on organizations. While particular emphasis is placed on economic approaches to both theory and empirics in this review, it is hoped that the process of addressing the productivity mystery will prove to be a useful springboard for other methodologies as well and for examining the broader issues involved. As a prelude to the literature survey, it is useful to define some of the terms used and to highlight some of the basic trends in the economics of IT.

Definitions:

* Information technology can be defined in various ways. Among the most common is the category "Office, Computing and Accounting Machinery" of the US Bureau of Economic Analysis (BEA), which consists primarily of computers. Some researchers use definitions that also include communications equipment, instruments, photocopiers and related equipment, and software and related services.

* Labor productivity is calculated as the level of output divided by a given level of labor input. Multifactor productivity (sometimes more ambitiously called total factor productivity) is calculated as the level of output for a given level of several inputs, typically labor, capital and materials. In principle, multifactor productivity is a better guide to the efficiency of a firm or industry because it adjusts for shifts among inputs, such as an increase in capital intensity, but lack of data can make this consideration moot.

* In productivity calculations, output is defined as the number of units produced times their unit value, proxied by their real price. Establishing the real price of a good or service requires the calculation of individual price deflators, often using hedonic methods, that eliminate the effects of inflation without ignoring quality changes.
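To make these definitions concrete, the following minimal sketch (Python, with entirely hypothetical numbers rather than any data from this paper) computes real output from a price deflator, then labor productivity and a simple Cobb-Douglas-weighted multifactor productivity index. The function names, input shares and figures are illustrative assumptions, not a prescribed method.

```python
# Minimal illustration of the definitions above (hypothetical numbers, not data
# from the paper). Real output is nominal output deflated by a quality-adjusted
# price index; labor productivity divides real output by labor input; the
# multifactor productivity index uses an assumed Cobb-Douglas weighting of
# labor, capital and materials, which is one common convention.

def real_output(nominal_output, price_deflator):
    """Deflate nominal output (current dollars) into constant dollars."""
    return nominal_output / price_deflator

def labor_productivity(real_out, labor_hours):
    """Output per unit of labor input."""
    return real_out / labor_hours

def multifactor_productivity(real_out, labor_hours, capital, materials,
                             alpha=0.6, beta=0.3, gamma=0.1):
    """Output per unit of a composite input bundle (weights are assumed shares)."""
    composite_input = (labor_hours ** alpha) * (capital ** beta) * (materials ** gamma)
    return real_out / composite_input

# A firm's nominal output rises from $100M to $130M while its deflator rises
# from 1.00 to 1.10: real output grows only about 18%, not 30%.
y0 = real_output(100.0, 1.00)
y1 = real_output(130.0, 1.10)
print(round(y1 / y0 - 1, 3))    # ~0.182

# With flat labor input, labor productivity grows by that same ~18%; if capital
# also rose sharply, the multifactor index shows a smaller gain.
print(round(labor_productivity(y1, 1000) / labor_productivity(y0, 1000) - 1, 3))
print(round(multifactor_productivity(y1, 1000, 60, 30) /
            multifactor_productivity(y0, 1000, 40, 30) - 1, 3))
```

The last two lines illustrate the point made above: because multifactor productivity nets out the growth in capital and materials, it can tell a different story than labor productivity computed from the same output series.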
Trends:

* The price of computing has dropped by half every 2-3 years [1] (figures 3a and 3b). If progress in the rest of the economy had matched progress in the computer sector, a Cadillac would cost $4.98, while ten minutes of labor would buy a year's worth of groceries. [2]

* There have been increasing levels of business investment in information technology equipment. These investments now account for over 10% of new investment in capital equipment by American firms [3] (figure 4).

* Information processing continues to be the principal task undertaken by America's work force. Over half the labor force is employed in information-handling activities (figure 5).

* Overall productivity growth has slowed significantly since the early 1970s, and measured productivity growth has fallen especially sharply in the service sector, which consumes over 80% of IT (figure 2).

* White collar productivity statistics have been essentially stagnant for 20 years (figure 6).

[1] In the last 35 years, the quality-adjusted costs of computing have decreased by over 6000-fold relative to equipment prices outside the computer sector (Gordon, 1987). This relationship has been dubbed "Moore's Law" after Gordon Moore, who first documented the trend in microprocessors. It is widely projected to continue at least into the next century.

[2] This comparison was inspired by the slightly exaggerated claim in Forbes (1980) that if the auto industry had done what the computer industry has done, a Rolls-Royce would cost $2.50 and get 2,000,000 miles to the gallon. The $4.98 Cadillac is based on a price of $30,890 for a 1991 Sedan de Ville divided by 6203, the relative deflator for computers. The grocery comparison is based on a wage of $10 an hour and $10,000 worth of groceries, each in actual 1991 dollars.

[3] Some studies estimate that as much as 50% of recent equipment investment is in information technology (Kriebel, 1989). This higher figure seems to be partly due to a broader definition of IT. A discrepancy also arises when recent investments are expressed in 1982 dollars, when IT was relatively more expensive. This has the effect of boosting IT's real share over time faster than its nominal share grows.

These facts suggest two central questions, which comprise the productivity paradox: 1) Why are companies investing so heavily in information technology if it doesn't add to productivity? 2) If information technology is contributing to productivity, why have we been unable to measure it?

In seeking to answer these questions, this paper builds on a number of previous literature surveys. Much of the material in section III is adapted from an earlier paper with Bruce Bimber (Brynjolfsson and Bimber, 1990), which also included an annotated bibliography of 104 related articles and a summary of six explanations for the productivity paradox from outside the economics literature. An earlier study by Crowston and Treacy (1986) identified 11 articles on the impact of IT on enterprise-level performance by searching ten journals from 1975 to 1985.
They conclude that there had been surprisingly little success in measuring the impact of IT and attribute this to the lack of clearly defined variables, which in turn stems from an inadequacy of suitable reference disciplines and methodologies. One natural reference discipline is economics, and an excellent review of recent research combining information systems and economics, by Bakos and Kemerer (1991), includes particularly relevant work in sections on macroeconomic impacts of information technology and on information technology and organizational performance. Because statistical work is central to the majority of the approaches to assessing IT productivity, another very useful survey is Gurbaxani and Mendelson's (1989) paper on the use of data from secondary sources in MIS research. In addition to summarizing the work that has already been done, they make a convincing case that using pre-compiled data sets has significant advantages over starting de novo with original data, as has been the more common practice among MIS researchers. Finally, many of the papers that seek to directly assess IT productivity begin with a literature survey. The reviews by Brooke (1991), Barua, Mukhopadhyay and Kriebel (1991), and Berndt and Morrison (1991b) were particularly useful.

Although over 150 articles were considered in this review, it cannot claim to be comprehensive. Rather, it aims to clarify for the reader the principal issues surrounding IT and productivity, reflecting the results of a computerized literature search of 30 of the leading journals in both information systems and economics [4] and, more importantly, discussions with many of the leading researchers in this area, who helped identify recent research that has not yet been published.

The remainder of the paper is organized as follows. The next section summarizes the empirical research that has attempted to measure the productivity of information technology. Section III classifies the explanations for the paradox into four basic categories and assesses the components of each in turn. Section IV concludes with summaries of the key issues identified and some avenues for further research.

[4] The journals searched included American Economic Review, Bell (Rand) Journal of Economics, Brookings Papers on Economic Activity, Econometrica, Economic Development Review, Economica, Economic Journal, Economist (Netherlands), Information Economics and Policy, International Economic Review, Journal of Business Finance, Communications of the ACM, Database, Datamation, Decision Sciences, Harvard Business Review, IEEE Spectrum, IEEE Transactions on Engineering Management, IEEE Transactions on Software Engineering, Information and Management, Interfaces, Journal of Systems Management, Management Science, MIS Quarterly, Operations Research, and Sloan Management Review. Articles were selected if they indicated an emphasis on computers, information systems, information technology, decision support systems, expert systems, or high technology, combined with an emphasis on productivity.

Dimensions of the Paradox

Productivity is the fundamental measure of a technology's contribution. With this in mind, CEOs and line managers have increasingly begun to question their huge investments in computers and related technologies (Loveman, 1988). While major success stories exist, so do equally impressive failures (see, for example, Kemerer and Sosa, 1990; Schneider, 1987). The lack of good quantitative measures for the output and value created by information technology has made the MIS manager's job of justifying investments particularly difficult. Academics have had similar problems assessing the contributions of this critical new technology, and this has been generally interpreted as a negative signal of its value. The disappointment in information technology has been chronicled in articles disclosing broad negative correlations with economy-wide productivity and information worker productivity. Econometric estimates have also indicated low IT capital productivity in a variety of manufacturing and service industries. The principal empirical research studies of IT and productivity are listed in table 1.

Table 1: Principal Empirical Studies of IT and Productivity
Economy-wide or cross-sector: (Jonscher, 1983; Jonscher, 1988); (Baily and Chakrabarti, 1988; Baily, 1986b; Baily and Gordon, 1988); (Roach, 1987a; Roach, 1988; Roach, 1989b); (Brooke, 1991); (Osterman, 1986); (Grove, 1990); (Dos Santos, Peffers and Mauer, 1991)
Manufacturing: (Berndt and Morrison, 1991b); (Siegel and Griliches, 1991); (Loveman, 1988); (Weill, 1988); (Dudley and Lasserre, 1989); (Morrison and Berndt, 1990); (Barua, Kriebel and Mukhopadhyay, 1991)
Services: (Cron and Sobol, 1983); (Strassman, 1985); (Baily, 1986a); (Roach, 1991; Roach, 1987b; Roach, 1989a); (Noyelle, 1990); (Brand and Duke, 1982); (Pulley and Braunstein, 1984); (Bender, 1986); (Bresnahan, 1986); (Franke, 1987); (Harris and Katz, 1988; Harris and Katz, 1989); (Parsons, Gotlieb and Denny, 1990); (Weitzendorf and Wigand, 1991)

Economy-wide Productivity and Information Worker Productivity

The Issue

One of the core issues for economists in the past decade has been the productivity slowdown that began in the early 1970s. There has been a drop in labor productivity growth from about 2.5% per year between 1953-1968 to about 0.7% per year from 1973-1979. Multifactor productivity growth, which takes into account changes in capital, declined from 1.75% a year to 0.32% over the same periods (Baily, 1986b). Even after accounting for factors such as the oil price shocks, changes in labor quality and potential measurement errors, most researchers still find that there is an unexplained residual drop in productivity as compared with the first half of the post-war period. The sharp drop in productivity roughly coincided with the rapid increase in the use of information technology (figure 1). Although recent productivity growth has rebounded somewhat, especially in manufacturing, the overall negative correlation between economy-wide productivity and the advent of computers is at the core of many of the arguments that information technology has not helped US productivity, or even that information technology investments have been counter-productive (Baily, 1986b).

This link is made more explicit in research by Stephen Roach (1987a; 1988) focusing specifically on information workers, regardless of industry. While in the past office work was not very capital intensive, recently the level of information technology capital per (white collar) information worker has begun approaching that of (blue collar) production capital per production worker.
Concurrently, the ranks of information workers have ballooned and the ranks of production workers have shrunk. Roach cites statistics indicating that output per production worker grew by 16.9% between the mid-1970s and 1986, while output per information worker decreased by 6.6%. He concludes: "We have in essence isolated America's productivity shortfall and shown it to be concentrated in that portion of the economy that is the largest employer of white-collar workers and the most heavily endowed with high-tech capital." Roach's analysis provides quantitative support for widespread reports of low office productivity. [5]

A more sanguine explanation is put forth by Brooke (1991). Although he confirmed a broad-level correlation with declines in productivity, he hypothesized that this was due to increases in product variety, which resulted in commensurate reductions in economies of scale. This hypothesis was supported by his finding of a positive correlation between IT investment and the number of trademark applications. Because variety generally has positive value to consumers, but is ignored by conventional measures of productivity, this finding suggests a measurement problem, which is explored more fully below in the section on mismeasurement.

[5] For instance, Lester Thurow has noted that "the American factory works, the American office doesn't," citing examples from the auto industry indicating that Japanese managers are able to get more output from blue collar workers (even in American plants) with up to 40% fewer managers.

Comment

Upon closer examination, the alarming correlation between IT and lower productivity at the level of the entire US economy is not compelling, because so many other factors affect output and therefore productivity. Until recently, computers were not a major share of the economy. Consider the following order-of-magnitude estimates. Information technology capital stock is currently equal to about 10% of GNP, or total output. If, hypothetically, IT were being used efficiently and its marginal product were 20% (exceeding the return to most other capital investments), then current GNP would be directly increased about 2% (10% x 20%) because of the existence of our current stock of IT. However, information technology capital stock did not jump to its current level in the past year alone. Instead, the increase must be spread over about 30 years, suggesting an average contribution to aggregate GNP growth of 0.06% in each year. [6] This would be very difficult to isolate, because so many other factors affected GNP, especially in the relatively turbulent 1970s and early 1980s. Indeed, if the marginal product of IT capital were anywhere from -20% to +40%, it would still not have affected aggregate GNP growth by more than about 0.1% per year, and productivity growth by even less. [7]

[6] In his comment on Baily and Gordon (1988), David Romer notes that a similar argument applies to almost any capital investment.

[7] In dollar terms, each white collar worker is endowed with about $10,000 in IT capital, which at a 20% ROI would increase his or her total output by about $2,000 per year as compared with pre-computer levels of output. Compare this to the $100,000 or so in salary and overhead that it costs to employ this worker, and the expectations for a technological silver bullet seem rather ambitious.
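The arithmetic behind this Comment is simple enough to restate explicitly. The short Python sketch below reproduces it using the same round, assumed figures (IT capital stock at 10% of GNP, built up over roughly 30 years); the variable names and numbers are illustrative, not estimates.

```python
# Back-of-envelope check of the order-of-magnitude argument above, using the
# paper's illustrative round numbers. Everything here is arithmetic on assumed
# values, not data.

it_capital_share_of_gnp = 0.10   # IT capital stock as a fraction of annual GNP
accumulation_years = 30          # rough period over which the stock was built up

def gnp_level_effect(marginal_product):
    """Total boost to the level of GNP from the existing IT stock."""
    return it_capital_share_of_gnp * marginal_product

def avg_annual_growth_contribution(marginal_product):
    """That level effect spread evenly over the accumulation period."""
    return gnp_level_effect(marginal_product) / accumulation_years

print(gnp_level_effect(0.20))                    # 0.02 -> about 2% of GNP
print(avg_annual_growth_contribution(0.20))      # ~0.0007 -> roughly 0.06% per year
for mp in (-0.20, 0.40):                         # sensitivity band from the text
    print(mp, round(avg_annual_growth_contribution(mp), 5))
# Even at a 40% marginal product, the implied growth contribution stays near
# 0.1% per year, which is hard to detect amid ordinary GNP fluctuations.
```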
This is not to say that computers may not have had significant effects in specific areas, like transaction processing, or on other characteristics of the economy, like employment shares, organizational structure or product variety. Rather, it suggests that very large changes in capital stock are needed to measurably change total output under conventional assumptions about typical rates of return. However, the growth in information technology stock is still strong, and the share of the total economy accounted for by computers is becoming quite substantial. Presumably, if computers are productive, we should begin to notice changes at the level of aggregate GNP in the near future.

As for the apparent stagnation in white collar productivity, one should bear in mind that relative productivity cannot be directly inferred from the number of information workers per unit of output. For instance, if a new delivery schedule optimizer allows a firm to substitute a clerk for two truckers, the increase in the number of white collar workers is evidence of an increase, not a decrease, in their relative productivity and in the firm's productivity as well. Osterman (1986) suggests that this is why clerical employment often increases after the introduction of computers, and Berndt and Morrison (1991b) confirm that information technology capital is, on average, a complement for white collar labor even as it leads to fewer blue collar workers. Unfortunately, more direct measures of office worker productivity are exceedingly difficult. Because of the lack of hard evidence, Panko (1984; 1991) has gone so far as to call the idea of stagnant office worker productivity a myth, although he cites no evidence to the contrary.

Independent of its implications for productivity, growth in the white collar work force cannot be entirely blamed on information technology. Although over 38% of workers now use computers in their jobs [8], the ranks of information workers began to surge well before the advent of computers (Porat, 1977). Jonscher (1988) even goes so far as to argue that causality goes the other way: the increased demand for information enabled economies of scale and learning in the computer industry, thereby reducing costs.

[8] According to the US National Center for Education Statistics, 38.3% of persons in the 1989 Current Population Survey used computers at work, including nearly 60% of those with four or more years of college. Interestingly, Krueger (1991) finds that workers using computers are paid an average wage premium of 8%, even after controlling for education, computer literacy and other factors.

These mitigating factors notwithstanding, the low measured productivity at the level of the whole economy and among white collar workers, especially in the face of huge increases in the accompanying capital stock, does call for closer scrutiny. A more direct case for weakness in information technology's contribution comes from the explicit evaluation of information technology capital productivity, typically by estimating the coefficients of a production function. This has been done in both manufacturing and service industries, and we review each in turn.

The Productivity of Information Technology Capital in Manufacturing

The Issues

There have been at least seven studies of IT productivity in the manufacturing sector, summarized in table 2.
A study by Gary Loveman (1988) provided some of the first econometric evidence of a potential problem when he examined data from 60 business units. [9] As is common in the productivity literature, he used ordinary least squares regression and assumed that production functions could be approximated by a Cobb-Douglas function. By taking the logarithm of all variables, he was able to estimate a linear relationship between changes in the log of output (q) [10] and changes in the log of spending on key inputs, including materials (m), purchased services (ps), labor (l), traditional capital (k), and information technology capital (c), while allowing for an exogenous time trend (t) and an error term (ε):

q = β1 m + β2 ps + β3 l + β4 k + β5 c + λt + ε    (1)

[9] Namely, the Management Productivity of IT (MPIT) subset of the PIMS data set.

[10] Where output was defined as (sales + net change in inventories) / price index.

Loveman estimated that the contribution of information technology capital to output (β5) was approximately zero over the five-year period studied, in almost every subsample he examined. His findings were fairly robust to a number of variations on his basic formulation and suggest a paradox: while firms were demonstrating a voracious appetite for a technology experiencing radical improvements, measured productivity gains were insignificant.

While Loveman's dependent variable was final output, Barua, Kriebel and Mukhopadhyay (1991) traced Loveman's results back a step by looking at IT's effect on intermediate variables such as capacity utilization, inventory turnover, quality, relative price and new product introduction. Using the same data set, they found that IT was positively related to three of these five intermediate measures of performance, although the magnitude of the effect was generally too small to measurably affect final output. Dudley and Lasserre (1989) also found econometric support for the hypothesis that better communication and information reduce the need for inventories, without explicitly relating this to bottom-line performance measures. Using a different data set, Weill (1988) was also able to disaggregate IT by use, and found that significant productivity could be attributed to transactional types of information technology (e.g. data processing), but was unable to identify gains associated with strategic systems (e.g. sales support) or informational investments (e.g. email infrastructure).

Morrison and Berndt have written two papers using a broader data set from the US Bureau of Economic Analysis (BEA) that encompasses the whole U.S. manufacturing sector. The first (Morrison and Berndt, 1990), which examined a series of highly parameterized models of production, found evidence that every dollar spent on IT delivered, on average, only about $0.80 of value on the margin, indicating a general overinvestment in IT. Their second paper (Berndt and Morrison, 1991b) took a less structured approach and examined broad correlations of IT with labor productivity and multifactor productivity, as well as other variables. This approach did not find a significant difference between the productivity of IT capital and other types of capital for a majority of the 20 industry categories examined. They did find that IT was correlated with significantly increased demand for skilled labor.
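To make the mechanics of a specification like equation (1) concrete, here is a minimal sketch in Python that simulates a small set of log-differenced inputs and outputs and recovers the coefficients by ordinary least squares. The data, coefficient values and variable names are invented for illustration and have no connection to the MPIT data or to Loveman's estimates.

```python
import numpy as np

# Minimal OLS illustration of a log-log (Cobb-Douglas) specification like
# equation (1). All data are simulated; the "true" coefficients below are
# arbitrary and chosen only so the regression has something to recover.
rng = np.random.default_rng(0)
n = 200                                                     # hypothetical observations

m, ps, l, k, c = (rng.normal(size=n) for _ in range(5))     # log input changes
t = rng.uniform(0, 5, size=n)                               # time trend
true_beta = np.array([0.45, 0.10, 0.30, 0.12, 0.00, 0.01])  # note beta5 = 0 for IT
X = np.column_stack([m, ps, l, k, c, t])
q = X @ true_beta + rng.normal(scale=0.05, size=n)          # log output changes

# Add an intercept and estimate by ordinary least squares.
X1 = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(X1, q, rcond=None)

labels = ["const", "materials", "services", "labor", "capital", "IT capital", "trend"]
for name, b in zip(labels, coef):
    print(f"{name:>11s}: {b: .3f}")
# With beta5 set to zero in the simulation, the estimated IT-capital coefficient
# comes back near zero: the "paradox" result, here true by construction.
```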
Finally, Siegel and Griliches (1991) used industry and establishment data from a variety of sources to examine several possible biases in conventional productivity estimates. Among their findings was a positive simple correlation between an industry's level of investment in computers and its multifactor productivity growth in the 1980s. They did not examine more structural approaches, in part because of troubling concerns they raised regarding the reliability of the data and government measurement techniques.

Table 2: Studies of IT in Manufacturing
(Loveman, 1988) | PIMS/MPIT | IT investments added nothing to output
(Weill, 1988) | Valve manufacturers | Contextual variables affect IT performance
(Dudley and Lasserre, 1989) | | IT and communication reduce inventories
(Morrison and Berndt, 1990) | BEA | IT marginal benefit is 80 cents per dollar invested
(Barua, Kriebel and Mukhopadhyay, 1991) | PIMS/MPIT | IT improved intermediate outputs, if not necessarily final output
(Berndt and Morrison, 1991b) | BEA, BLS | IT not correlated with higher multifactor productivity in most industries; more labor use
(Siegel and Griliches, 1991) | Multiple government sources | IT-using industries tend to be more productive; government data is unreliable

Comment

All authors make a point of emphasizing the limitations of their respective data sets. The MPIT data, which both Loveman and Barua, Kriebel and Mukhopadhyay use, can be particularly unreliable. As Loveman is careful to point out, his results are based on dollar-denominated outputs and inputs, and therefore depend on price indices which may not accurately account for changes in quality or the competitive structure of the industry. The results of both of these studies may also be unrepresentative to the extent that the relatively short period covered by the MPIT data, 1978-83, was unusually turbulent. The BEA data may be somewhat more dependable but are subject to subtle biases due to the unintuitive techniques used to aggregate and classify establishments. One of Siegel and Griliches' principal conclusions was that "after auditing the industry numbers, we found that a non-negligible number of sectors were not consistently defined over time." However, the generally reasonable estimates derived for the other, non-information-technology factors of production in each of the studies indicate that there may indeed be something worrisome, or at least special, about information technology. Additional econometric work would go far toward establishing whether these results are an artifact of the data or a genuine puzzle in need of more thorough analysis.

The Productivity of Information Technology Capital in Services

The Issues

It has been widely reported that most of the productivity slowdown is concentrated in the service sector (Roach, 1991; Roach, 1987b; Schneider, 1987). Before about 1970, service productivity growth was comparable to that in manufacturing, but since then the trends have diverged significantly. [11] Meanwhile, services have dramatically increased as a share of total employment and, to a lesser extent, as a share of total output. Because services use over 80% of information technology, this has been taken as indirect evidence of poor information technology productivity. The studies that have tried to assess IT productivity in the service sector are summarized in table 3.
One of the first studies of IT's impact was by Cron and Sobol (1983), who looked at a sample of wholesalers. They found that on average IT's impact was not significant, but that it seemed to be associated with both very high and very low performers. This finding has engendered the hypothesis that IT tends to reinforce existing management approaches, helping well-organized firms succeed but only further confusing managers who haven't properly structured production in the first place.

[11] According to government statistics, from 1953 to 1968, labor productivity growth in services averaged 2.56%, vs. 2.61% in manufacturing. For 1973 to 1979, the figures are 0.68% vs. 1.53%, respectively (Baily, 1986). However, a recent study (Gordon, 1989) suggests that measurement errors in US statistics systematically understate service productivity growth relative to manufacturing. More recently, computers definitely have caused some divergence in the statistics on manufacturing and service productivity, but for a very different reason. Because of the enormous quality improvements attributed to computers, the nonelectrical machinery category (containing the computer-producing industry) has shown tremendous growth. As a result, while overall manufacturing productivity growth has rebounded from about 1.5% in the 1970s to 3.5% in the 1980s, about two thirds of this increase is simply attributable to the greater production (as opposed to use) of computers (see the comment by William Nordhaus on Baily and Gordon, 1988, and section III.A of this paper).

Strassman (1985; 1990) also reports disappointing evidence in several studies. In particular, he found that there was no correlation between IT and return on investment in a sample of 38 service sector firms: some top performers invest heavily in IT, while some do not. In his most recent book (1990), he concludes that there is no relation between spending for computers, profits and productivity.

Roach's widely cited research on white collar productivity, discussed above, focused principally on IT's dismal performance in the service sector (1991; 1987a; 1987b; 1988; 1989a; 1989b). Roach argues that IT is an effectively used substitute for labor in most manufacturing industries, but has paradoxically been associated with bloating white-collar employment in services, especially finance. He attributes this to relatively keener competitive pressures in manufacturing and foresees a period of belt-tightening and restructuring in services as they also become subject to international competition.

There have been several studies of IT's impact on the performance of various types of financial services firms. A recent study by Parsons, Gottlieb and Denny (1990) estimated a production function for banking services in Canada and found that overall, the impact of IT on multifactor productivity was quite low between 1974 and 1987. They speculate that IT has positioned the industry for greater growth in the future. Similar conclusions are reached by Franke (1987), who found that IT was associated with a sharp drop in capital productivity and stagnation in labor productivity, but remained optimistic about the future potential of IT, citing the long time lags associated with previous technological transformations such as the conversion to steam power. On the other hand, Brand (1982), using BLS data and techniques, found that moderate productivity growth had already occurred in banking. Harris and Katz (1988; 1989) and Bender (1986) looked at data on the insurance industry from the Life Office Management Association Information Processing Database. They found a positive relationship between IT expense ratios and various performance ratios, although at times the relationship was quite weak. Several case studies of IT's impact on performance have also been done, including one by Weitzendorf and Wigand (1991), which developed a model of information use in two service corporations, and a study of an information services firm by Pulley and Braunstein (1984), which found an association with increased economies of scope.

Table 3: Studies of IT in Services
(Brand and Duke, 1982) | BLS | Productivity growth of 1.3%/yr in banking
(Cron and Sobol, 1983) | 138 medical supply wholesalers | Bimodal distribution among high IT investors: either very good or very bad
(Pulley and Braunstein, 1984) | Monthly data from an information service firm | Significant economies of scope
(Clarke, 1985) | Case study | Major business process redesign needed to reap benefits in investment firm
(Strassman, 1985; Strassman, 1990) | Computerworld survey of 38 companies | No correlation between various IT ratios and performance measures
(Bender, 1986) | LOMA insurance data on 132 firms | Weak relationship between IT and various performance ratios
(Bresnahan, 1986) | Financial services firms | Large gains in imputed consumer welfare
(Franke, 1987) | Finance industry data
(Roach, 1991; Roach, 1987b; Roach, 1989a) | Principally BLS, BEA | Vast increase in IT capital per information worker while measured output decreased
(Harris and Katz, 1988; Harris and Katz, 1989) | LOMA insurance data for 40 firms | Weak positive relationship between IT and various performance ratios
(Noyelle, 1990) | US and French industry | Severe measurement problems in services
(Parsons, Gotlieb and Denny, 1990) | Internal operating data from 2 large banks | IT coefficient in translog production function small and often negative
(Weitzendorf and Wigand, 1991) | Interviews at 2 companies | Interactive model of information use

Comment

Measurement problems are even more acute in services than in manufacturing. In part, this arises because many service transactions are idiosyncratic, and therefore not subject to statistical aggregation. Unfortunately, even when abundant data exist, classifications sometimes seem arbitrary. For instance, in accordance with a fairly standard approach, Parsons, Gottlieb and Denny (1990) treated time deposits as inputs into the banking production function and demand deposits as outputs. The logic for such decisions is often difficult to fathom, and subtle changes in deposit patterns or classification standards can have disproportionate impacts.

The importance of variables other than IT also becomes particularly apparent in some of the service sector studies. Cron and Sobol's finding of a bimodal distribution suggests that some variable was left out of the equation. Furthermore, researchers and consultants have increasingly emphasized the theme of re-engineering work when introducing major IT investments (Davenport and Short, 1990; Hammer, 1990). A frequently cited example is the success of the Batterymarch services firm, as documented by Clarke (1985).
Batterymarch used information technology to radically restructure the investment management process, rather than simply overlaying IT on existing processes.

In sum, while a number of the dimensions of the information technology productivity paradox have been overstated, the question remains as to whether information technology is having the positive impact expected. In particular, better measures of information worker productivity are needed, as are explanations for why information technology capital hasn't clearly improved firm-level productivity in manufacturing and services. We now examine four basic approaches taken to answer these questions.

Leading Explanations for the Paradox

Although it is too early to conclude that IT's productivity contribution has been subpar, a paradox remains in our inability to unequivocally document any contribution after so much effort. The various explanations that have been proposed can be grouped into four categories:

1) Mismeasurement of outputs and inputs,
2) Lags due to learning and adjustment,
3) Redistribution and dissipation of profits,
4) Mismanagement of information and technology.

The first two explanations point to shortcomings in research, not practice, as the root of the productivity paradox. It is possible that the benefits of IT investment are quite large, but that a proper index of its true impact has yet to be analyzed. Traditional measures of the relationship between inputs and outputs fail to account for non-traditional sources of value. Second, if significant lags between cost and benefit exist, then short-term results look poor but ultimately the pay-off will be proportionately larger. This would be the case if extensive learning, by both individuals and organizations, were needed to fully exploit IT, as it is for most radically new technologies.

A more pessimistic view is embodied in the other two explanations. They propose that there really are no major benefits, now or in the future, and seek to explain why managers would systematically continue to invest in information technology. The redistribution argument suggests that those investing in the technology benefit privately but at the expense of others, so no net benefits show up at the aggregate level. The final type of explanation examined is that we have systematically mismanaged information technology: there is something in its nature that leads firms or industries to invest in it when they shouldn't, to misallocate it, or to use it to create slack instead of productivity. Each of these four sets of hypotheses is assessed in turn in this section.

Measurement Errors

The Issues

The easiest explanation for the low measured productivity of information technology is simply that we're not properly measuring output. Denison (1989) makes a wide-ranging case that productivity and output statistics can be very unreliable. Most economists would agree with the evidence presented by Gordon and Baily (1989) and Noyelle (1990) that the problems are particularly bad in service industries, which happen to own the majority of information technology capital. It is important to note that measurement errors need not necessarily bias IT productivity if they exist in comparable magnitudes both before and after IT investments.
However, the sorts of benefits ascribed by managers to information technology increased quality, variety, customer service, speed and responsiveness are precisely the aspects of output measurement that are poorly accounted for in productivity statistics as well as in most firms accounting numbers. This can lead to systematic underestimates of IT productivity. The measurement problems are particularly acute for IT use in the service sector and among white collar workers. Since the null hypothesis that no improvement occurred wins by default when no measured improvement is found, it probably is not coincidental that service sector and information worker productivity is considered more of a problem than manufacturing and blue collar productivity, where measures are better. III Information Technology and Productivity a. Output Mismeasurement As discussed in the introduction, when comparing two output levels, it is important to deflate the prices so they are in comparable real dollars. Accurate price adjustment should remove not only the effects of inflation but also adjust for any quality changes. Much of the measurement problem arises from the difficulty of developing accurate, quality-adjusted price deflators. Additional problems arise when new products or features are introduced, not only because they have no predecessors for direct comparison, but also because variety itself has value, and that can be nearly impossible to measure. The positive impact of information technology on variety and the negative impact of variety on measured productivity has been econometrically and theoretically supported by Brooke (1991). He argues that lower costs of information processing have enabled companies to handle more products and more variations of existing products. However, the increased scope has been purchased at the cost of reduced economies of scale and has therefore resulted in higher unit costs of output. For example, if a clothing manufacturer chooses to produce more colors and sizes of shirts, which may have value to consumers, existing productivity measures rarely account for such value and will typically show higher productivity in a firm that produces a single color and size. 12 Higher prices in industries with increasing product diversity is likely to be attributed to inflation, despite the real increase in value provided to consumers. In services, the problem of unmeasured improvements can be even worse than in manufacturing. For instance, the convenience afforded by twenty-four hour ATMs is frequently cited as an unmeasured quality improvement (Banker Kauffman, 1988). 12The same phenomenon suggests that much of the initial decline in productivity experienced by centrally-planned economies when they liberalize is spurious. Draft: 1/29/92 page 21 Draft: 1/29/92 Information Technology and Productivity page 22 How much value has this contributed to banking customers? Government statistics implicitly assume it is all captured in the number of transactions, or worse, that output is a constant multiple of labor input! (Mark, 1982) In a case study of the finance, insurance and real estate sector, where computer usage and the numbers of information workers are particularly high, Baily and Gordon (Baily Gordon, 1988) identified a number of practices by the Bureau of Economic Analysis (BEA) which tend to understate productivity growth. Their revisions add 2. 3% per year to productivity between 1973 and 1987 in this sector. 13 b. 
Information Technology Stock Mismeasurement A related measurement issue is how to measure information technology stock itself. For any given amount of output, if the level of IT stock used is overestimated, then its unit productivity will appear to be less than it really is. Denison (1989) argues that the rapid decreases in the real costs of computer power are largely a function of general advances in knowledge and as a result, the government overstates the decline in the computer price deflator by attributing these advances to the producing industry. If this is true, the real quantity of computers purchased recently is not as great as statistics show, while the real quantity purchased 20 years ago is higher. The net result is that much of the productivity improvement that the government attributes to the computer-producing industry, should be allocated to computer-using industries. Effectively, computer users have been overcharged for their recent computer investments in the government productivity calculations. c. Input Mismeasurement 13 They also add 1. 1% to productivity growth before 1973. III Information Technology and Productivity A third issue is the measurement of other inputs. If the quality of work life is improved by computer usage (less repetitive retyping, tedious tabulation and messy mimeos), then theory suggests that proportionately lower wages can be paid. Thus the slow growth in clerical wages may be an artifact of unmeasured improvements in work life that are not accounted for in government statistics. Baily and Gordon (1988) conjecture that this may also be adding to the underestimation of productivity. To the extent that complementary inputs, such as software, or training, are required to make investments in information technology worthwhile, labor input may also be overestimated. Although spending on software and training yields benefits for several years, it is generally expensed in the same year that computers are purchased, artificially raising the short-term costs associated with computerization. In an era of annually rising investments, the subsequent benefits would be masked by the subsequent expensing of the next, larger, round of complementary inputs. On the other hand, IT purchases may also create long-term liabilities in software and hardware maintenance that are not fully accounted for, leading to an underestimate of ITs impact on costs. d. Methodological Concerns In addition to data problems, the methodology used to assess IT impacts can also significantly affect the results. Alpar and Kim (1990) applied two approaches to the same data set. One approach was based on key ratios and the other used a cost function derived from microeconomic theory. 4 They found that the key ratios approach, which had been 14 An example of the key ratios approach is examining the correlation between the ratio of information processing expenses to total expenses and the ratio of total operating expenses to premium income, as Bender [, 1986 #295] did. An example of the cost function approac h is to use duality theory to derive a cost function from a production function, such as the Cobb-Douglas function described above that was used by Loveman [, 1988 #58]. The exact function used by Alpar and Kim was the translog cost function, which is more general, but which requires the estimation of a large number of parameters. 
Draft: 1/29/92 page 23 Draft: 1/29/92 Information Technology and Productivity page 24 previously used by Bender (1986) and Cron and Sobol (1983), among others, could be particularly misleading. In an effort to model IT effects more rigorously, several papers have called for the use of approaches derived from microeconomics. Cooper and Mukhopadhyay (1990) advocate a production function approach while frontier methodologies such as data envelopment analysis (DEA) have been proposed by Chismar and Kriebel (1985) and Stabell (1982). A very different approach has been applied in an article by Tim Bresnahan (1986). Recognizing the inherent difficulties in measurement in the financial services sector, Bresnahan made no attempt to directly measure output. Instead, he inferred it from the level of spending on mainframes under the assumption that the unregulated parts of the financial services sector were competitive and were therefore acting as agents for consumers. He found that welfare gains were five times greater than expenditures through 1973. Bresnahans findings serve to underscore the size of the gap between the benefits perceived by the consumers of IT and those measured by researchers using conventional techniques. Comments Output measurement is undoubtedly problematic. Rapid innovation has made information technology-intensive industries particularly susceptible to the problems associated with measuring quality changes and valuing new products. The way productivity statistics are currently kept can lead to bizarre anomalies: to the extent that ATMs lead to fewer checks being written, they can actually lower productivity statistics. Increased variety, improved timeliness of delivery and personalized customer service are additional benefits that are poorly represented in productivity statistics. These are all qualities that are particularly likely to be enhanced by information technology. Because III Information Technology and Productivity information is intangible, increases in the implicit information content of products and services are likely to be under-measured compared to increases in materials content. Nonetheless, some analysts are skeptical that measurement problems can explain much of the slowdown. They point out that by many measures, service quality has gone down, not up. 15 Furthermore, they question the value of variety when it takes the form of six dozen brands of breakfast cereal. Indeed, models from industrial organization theory suggest that while more variety will result from the flexible manufacturing and lower search costs enabled by IT, the new equilibrium can exhibit excess variety making consumers worse off (Tirole, 1988). Denison is in the minority in his view that the government is overestimating the improvements in computing power per dollar. A study by Gordon (1987) found that, if anything, computer prices are declining slightly faster than government statistics show. More recently, a study by Triplett (1989) considered Denisons criticisms but in the end supported the BEA methods. 16 Ultimately, a closer look at productivity statistics reminds researchers that the poor showing of information technology may not rest on an entirely solid foundation simply because the statistics are not as reliable as we would like. Lags The Issues 5 Nordhaus in a comment on Baily and Gordon (1988) recalls the doctors house call, custom tailoring, and windshield wipin g at gas stations, among other relics. 
A second explanation for the paradox is that the benefits from information technology can take several years to show up on the bottom line.

a. Evidence of Lags

The idea that new technologies may not have an immediate impact is a common one in business. For instance, a survey of executives suggested that many expected it to take as much as five years for information technology investments to pay off (Nolan/Norton, 1988). This accords with a recent econometric study by Brynjolfsson et al. (1991a), which found lags of two to four years before the strongest organizational impacts of information technology were felt. Loveman (1988) also found slightly higher, albeit still very low, productivity when small lags were introduced. In general, while the benefits from investment in infrastructure can be large, they are indirect and often not immediate.

b. Theoretical Basis for Lags

The existence of lags has some basis in theory. Because of its unusual complexity and novelty, firms and individual users of information technology may require some experience before becoming proficient (Curley & Pyburn, 1982). According to dynamic models of learning-by-using, the optimal investment strategy sets short-term marginal costs greater than short-term marginal benefits. This allows the firm to ride the learning curve and reap benefits analogous to economies of scale (Scherer, 1980). If only short-term costs and benefits are measured, then it might appear that the investment was inefficient. Viewed in this framework, there is nothing irrational about the experimentation phase firms are said to experience, in which rigorous cost/benefit analysis is not undertaken (Nolan/Norton, 1988). Because future information technology investments tend to be large relative to current investments, the learning effect could potentially be quite substantial. A similar pattern of costs and benefits is predicted by an emerging literature that treats investments in information technology as options, with short-term costs but with the potential for long-term benefits (Kambil, Henderson & Mohsenzadeh, 1991).

Comments

One way to address the measurement problem associated with complementary inputs (see section III.A.1.c) is to introduce appropriate lags in the estimation procedure. For instance, the purchase of a mainframe computer must generally precede the development of mainframe database software. Software, in turn, usually precedes data acquisition. Good decisions may depend on years of acquired data and may not instantaneously lead to profits. [Footnote 17: It has been observed that firms that spend proportionately more money on software appear to be more profitable (Computer Economics Report, 1988). If firms go through a hardware-buying phase followed by an applications phase, then this may have more to do with firms being in different stages of a multi-year process than with different technology strategies.] Optimally, a manager must take into account these long-term benefits when purchasing a computer, and so must the researcher seeking to verify the benefits of computerization. If managers are rationally accounting for lags, this explanation for low information technology productivity growth is particularly optimistic. In the future, not only should we reap the then-current benefits of the technology, but also enough additional benefits to make up for the extra costs we are currently incurring.
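As an illustration of what "introducing appropriate lags in the estimation procedure" can look like, the following sketch fits a log Cobb-Douglas regression on simulated data in which IT capital pays off with a delay. It is a hypothetical example: the variable names, the two-year lag and the coefficients are assumptions chosen purely for illustration, not the specifications used by Loveman (1988) or Brynjolfsson et al. (1991a).

# Illustrative sketch only: a log Cobb-Douglas regression in which IT capital
# enters with a distributed lag. All names and magnitudes are assumed for
# demonstration, not taken from the studies cited in the text.
import numpy as np

rng = np.random.default_rng(0)
T = 40                                    # years of (simulated) data
ln_L = rng.normal(4.0, 0.1, T)            # log labor input
ln_K = rng.normal(5.0, 0.1, T)            # log ordinary capital
ln_C = np.linspace(1.0, 3.0, T) + rng.normal(0.0, 0.05, T)  # log IT capital, rising over time

ln_Q = 1.0 + 0.6 * ln_L + 0.3 * ln_K + rng.normal(0.0, 0.02, T)
ln_Q[2:] += 0.15 * ln_C[:-2]              # assumed: IT pays off with a two-year lag

def it_elasticity(lag):
    """OLS coefficient on ln(IT capital) lagged `lag` years."""
    y = ln_Q[lag:]
    X = np.column_stack([np.ones(T - lag), ln_L[lag:], ln_K[lag:], ln_C[:T - lag]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[3]                        # coefficient on lagged ln(IT capital)

for lag in range(4):
    print(f"lag {lag}: estimated IT elasticity = {it_elasticity(lag):.3f}")

On data generated this way, the estimated IT elasticity is typically largest at the (true) two-year lag, which illustrates how a purely contemporaneous specification can understate IT's measured contribution.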
However, the credibility of this explanation is somewhat undermined by the fact that American managers have not been noted for their ability to postpone benefits to the future. On the contrary, the risk and uncertainty associated with new technologies can make risk-averse managers require higher, not lower, rates of return before they will invest. Increased familiarity, ease of use and end-user computing may lead to reduced lags between the costs and benefits of computerization in the future.

Redistribution

The Issues

A third possible explanation is that information technology may be beneficial to individual firms, but unproductive from the standpoint of the industry as a whole or the economy as a whole: IT rearranges the shares of the pie without making it any bigger.

a. The Private Value of Information Can Exceed its Social Value

There are several arguments for why redistribution may be more of a factor with IT investments than with other investments. For instance, information technology may be used disproportionately for market research and marketing, activities which can be very beneficial to the firm while adding nothing to total output (Baily & Chakrabarti, 1988; Lasserre, 1988). Furthermore, economists have recognized for some time that, compared to other goods, information is particularly vulnerable to rent dissipation, in which one firm's gain comes entirely at the expense of others, instead of by creating new wealth. As Hirshleifer (1971) pointed out, advance knowledge of demand, supply, weather or other conditions that affect asset prices can be very profitable privately even without increasing total output. This will lead to excessive incentives for information gathering. In a similar spirit, races to be the first to apply an innovation can also lead to rent dissipation (Fudenberg & Tirole, 1985). The rapid-fire pace of innovation in the information technology industry might also encourage this form of wasteful investment.

b. Models of Redistribution

Baily and Chakrabarti (1988) run a simulation under the assumption that a major share of the private benefits of information technology result from redistribution. The results are broadly consistent with the stylized facts of increased amounts of information technology and workers without increases in total productivity.

Comments

Unlike the other possible explanations, the redistribution hypothesis would not explain any shortfall in IT productivity at the firm level: firms with inadequate IT budgets would lose market share and profits to high IT spenders. In this way, an analogy could be made to models of the costs and benefits of advertising. It is interesting to note that most of the reasons for investing in information technology given by the articles in the business press involve taking profits from competitors rather than lowering costs. [Footnote 18: Porter and Millar (1985) is not atypical. They emphasize competitive advantage gained by changes in industry structure, product and service differentiation and the spawning of new businesses, while devoting about 5% of their space to cost savings enabled by IT. Others ignore cost reductions entirely.]

Mismanagement

The Issues

A fourth possibility is that, on the whole, information technology really is not productive at the firm level.
The investments are made nevertheless because the decision-makers are not acting in the interests of the firm. Instead, they are (a) increasing their slack, (b) signalling their prowess, or (c) simply using outdated criteria for decision-making.

a. Increased scope for managerial slack

Many of the difficulties that researchers have in quantifying the benefits of information technology would also affect managers (Baily, 1986a; Gremillion & Pyburn, 1985). As a result, they may have difficulty in bringing the benefits to the bottom line if output targets, work organization and incentives are not appropriately adjusted (McKersie & Walton, 1988). The result is that information technology might increase organizational slack instead of output or profits. This is consistent with arguments by Roach (1989a) that manufacturing has made better use of information technology than has the service sector because manufacturing faces greater global competition, and thus tolerates less slack.

b. Information consumption as a signal

Feldman and March (1981) also point out that good decisions are generally correlated with significant consumption of information. If the amount of information requested is more easily observable than the quality of decisions, a signalling model will show that too much information will be consumed.

c. Use of outdated management heuristics

A related argument derives from evolutionary models (Nelson, 1981). The difficulties in measuring the benefits of information and information technology discussed above may also lead to the use of heuristics, rather than strict cost/benefit accounting, to set levels of information technology investments. [Footnote 19: Indeed, a recent review of the techniques used by major companies to justify information technology investments (Yamamoto, 1991) revealed surprisingly little formal analysis. See Clemons (1991) for an assessment of the IT justification process.] Our current institutions, heuristics and management principles evolved largely in a world with little information technology. The radical changes enabled by information technology may make these institutions outdated (see, e.g., Clarke, 1985; Franke, 1987). For instance, a valuable heuristic in 1960 might have been "get all readily available information before making a decision." The same heuristic today could lead to information overload and chaos (Thurow, 1987). Indeed, Ayres (1989) argues that the rapid speed-up enabled by information technology creates unanticipated bottlenecks at each human in the information processing chain. More money spent on information technology won't help until these bottlenecks are addressed. Indeed, researchers have found that a successful IT implementation process must not simply overlay new technology on old processes (Davenport & Short, 1990). At a broader level, several researchers suggest that our currently low productivity levels are symptomatic of an economy in transition, in this case to the information era (David, 1989; Franke, 1987; Gay & Roach, 1986). For instance, David makes an analogy to the electrification of factories at the turn of the century.
Major productivity gains did not occur for twenty years, when new factories were designed and built to take advantage of electricity's flexibility, which enabled machines to be located based on work-flow efficiency instead of proximity to waterwheels, steam engines and power-transmitting shafts and rods.

Comments

While the idea of firms consistently making inefficient investments in IT is anathema to the neoclassical view of the firm as a profit-maximizer, it can be explained formally by models such as agency theory, employment signalling models and evolutionary economics, which treat the firm as a more complex entity. The fact that firms continue to invest large sums in the technology suggests that the individuals within the firm who make investment decisions are getting some benefit, or at least believe they are getting some benefit, from IT. For instance, a model of how IT enables managerial slack can be developed using agency theory. The standard result in this literature is that when managers' (agents') incentives are not aligned with shareholders' (principals') interests, suboptimal investment decisions and effort can result. One little-noted feature of most agency models is that the incentives for agents to acquire additional information generally exceed the social benefits. This is because agents can use the information to earn rents and to short-circuit the incentive scheme (Brynjolfsson, 1990a). Thus, information technology investments may be very attractive to managers even when they do little to boost productivity. To the extent that competition reduces the scope for managerial slack, the problem is alleviated. In general, however, we do not yet have comprehensive models of the internal organization of the firm, and researchers, at least in economics, are mostly silent on the sorts of inefficiency discussed in this section.

Conclusion

Summary

Research on information technology

Sunday, November 24, 2019

Memories of the Civil War essays

Memories of the Civil War essays My name is Henry Campbell, and I have just enlisted in the Union Army! But first, let me tell you a little bit about myself. I was born in Muncie, Indiana, on June 20th, 1842. Right now I am 19 years of age and have been assigned as Corporal of the Indiana Light Battery because of my good leadership, courage, and a strong lust for war. My parents are Harvey and Janet Campbell, and before the war started, we ran a successful dry goods store. Our economic standing is pretty good, I would say, and we can afford to buy what we like. My job used to be as a dry goods clerk for my father, and I had made a decent amount of money, but I felt that I needed more than just money to live a good life. My parents did have some disagreements with me joining the Union Army, but I finally convinced them to let me go. I had nothing to do with my life, just like many of my friends who had done the same thing: join the war and fight for what's right, for freedom, equality, and to be given a true name in history that would be heard from sea to shining sea. I had one sibling, a brother named Gregory, who died at the Battle of Bull Run at the age of 21, trying to defend the industrious Union lands. Because of this I have gone off to fight against the dirty Southerners, to avenge my brother's death, to free the thousands if not millions of slaves, and to reunite this beautiful land. As you can see, my stand on slavery is that I want all of the innocent slaves to be free, to have them regain their freedom, to stop having them fear whites, and to let them live freely and equally with white men. My education is good since I have gone to Purdue University, one of the finest universities in all of Indiana. In fact, I had just finished Purdue about four months ago; time sure does fly fast here. And my religion is Christian. I support the Union because they understand how life should be. What I mean is that every day in the Union ...

Saturday, November 23, 2019

French Passive Constructions

French Passive Constructions Passive constructions are those in which a verb's action is performed on the subject, rather than the subject performing the action as in active (normal) constructions. The passive voice is the most common French passive construction, but there are a couple of others to watch out for as well.

Other French Passive Constructions

Passive Infinitive: Even though the French infinitive translates as "to + verb," the French infinitive sometimes needs to be preceded by a preposition. This is the case with the passive infinitive, which is commonly used with indefinite and negative words, such as Il n'y a rien à manger - There's nothing to eat.

Passive Reflexive: In the passive reflexive construction, a normally non-reflexive verb is used reflexively in order to express the passive nature of the action, as in Ça se voit - That's obvious.

Reflexive Causative: The reflexive causative (se faire + infinitive) indicates something that happens to the subject, either per someone else's implied action or wish, or unintentionally.

Passive Reflexive in Detail

In French (and English) it is preferable to avoid the passive voice. French has numerous constructions which are commonly used in place of the passive voice, one of which is the passive reflexive. The French passive reflexive is used in place of the passive voice in order to avoid naming the agent of a verb. The passive reflexive is formed with a noun or pronoun, then the reflexive pronoun se, and finally the appropriate verb conjugation (third-person singular or plural). In essence, this construction uses a non-reflexive verb reflexively in order to demonstrate the passive nature of the action. The literal translation of the French passive reflexive (something does something to itself) is strange to English ears, but it's important to recognize this construction and understand what it actually means.

Ça se voit. - That's obvious.
Ça s'aperçoit à peine. - It's hardly noticeable.
Cela ne se dit pas. - That isn't said.
Ce livre se lit souvent. - This book is often read.
Comment se prononce ce mot ? - How is this word pronounced?
Comment ça s'écrit ? (informal) - How is that spelled?
Un homme s'est rencontré hier. - A man was found yesterday.
Un coup de tonnerre s'est entendu. - A crash of thunder was heard.
Les mûres ne se vendent pas ici. - Blackberries are not sold here.
Ce produit devrait s'utiliser quotidiennement. - This product should be used daily.

Thursday, November 21, 2019

Juries are fundamental to our adversarial criminal justice process and Essay

Juries are fundamental to our adversarial criminal justice process and the only real guarantee of fairness between the State (as prosecutor) and the Individual - Essay Example In the case of a trial by jury, a decision is rendered by a group of nine individuals who may be drawn from different backgrounds, thereby bringing a depth of understanding of problems that single judges may not possess. According to Janata, "it is the mix of different persons with different backgrounds and psychological traits in the jury room that produces the desired results" (Janata, 1976: 595-596). This feature may imbue juries with a greater ability to discern and make accurate determinations about the credibility of witnesses and the validity of arguments being offered, particularly in criminal trials, because a jury is able to evaluate witnesses, plaintiffs and defendants from their perspective as ordinary citizens. Judges may sometimes get mired in legal formalities and procedures to such an extent that it may impede their intuitive judgments. There is also a greater possibility of bias arising when a single judge makes a decision on a case, particularly when it is a criminal case. In the case of a jury trial, the decision rendered is the cumulative effect of group deliberation, after the input and reflections from the different members comprising the jury are assimilated. Hence, a jury has the advantage of collective recall and weighing up of factors impacting upon a case. Since each fact is explored and discussed in a group, it allows group scrutiny in which bias is more likely to be eliminated than in the case of a single judge. Jury trials have been advocated as an effective measure to bring justice to citizens, especially in criminal trials where jurors are believed to be better able to make assessments and judgments about the character and believability of witnesses. Gastill and Weiser (2006) argue in favor of jury trials on the basis that being part of a jury can spur greater levels of civic engagement from juror citizens and thereby provide a spur for real, deliberative democracy. While jurors do not make policy decisions, the

CLOCKS Assignment Example | Topics and Well Written Essays - 500 words

CLOCKS - Assignment Example Quartz crystals can be set vibrating with an electric current, with crystal vibrations ranging from 2.5 to 5 million times a second. This means that the vibrations in quartz clocks allow them to measure time with an accuracy down to a millionth of a second. The present-day quartz clock was developed in the early 1900s. A clock needs certain basic components for it to work. First, it must have a power source that will allow it to create motion. Second, the clock must have a time base which provides a periodic oscillation dictating the measurement of time. The time base is essentially the device that controls clock signals. Lastly, it must have a way to convey the information generated by the time base and be able to display this information to actually tell time. From the 19th century until the middle of the 20th century, the pendulum clock was the standard time teller. The principle of the pendulum at work is such that the period of its swing is independent of the amplitude, or size, of the swing. In effect, the only factors affecting the period are the length of the pendulum and the force of gravity. Each swing of the pendulum releases a spring-loaded ratchet in the clock mechanism, which drives the hands. If the pendulum is left alone, frictional forces act upon it and it will eventually stop. Thus, a pendulum clock must contain a weight-driven or electrically operated mechanism that periodically pushes the pendulum to keep it swinging. Pendulum clocks and earlier versions of watches known as chronometers are quite cumbersome because their movement stops when they are not wound. In addition, pendulum clocks are highly dependent on external factors such as the force of gravity and temperature. Thus, quartz clocks and watches are the more popular options today. Quartz clocks are battery powered, with gears regulated by a tiny crystal of quartz. When the battery sends electricity to the quartz crystal through an electronic circuit, the quartz crystal oscillates at
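For reference, the standard small-angle formula for a simple pendulum (a textbook result, not something stated in the passage above) makes this dependence on length and gravity explicit:

\[ T = 2\pi\sqrt{\frac{L}{g}} \]

Here T is the period of one complete swing, L is the length of the pendulum and g is the acceleration due to gravity; the amplitude does not appear, which is why small swings of different sizes take nearly the same time. As an example, a pendulum about 0.994 m long (the classic "seconds pendulum") has a period of roughly two seconds under standard gravity.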

Wednesday, November 20, 2019

Old Testament Book Summaries Assignment Example | Topics and Well Written Essays - 250 words

Old Testament Book Summaries - Assignment Example During this time the Israelites depended on manna from heaven to feed them. Only a daily amount could be collected. Faith in Yahweh was reinforced through the law. When Moses brought down the Ten Commandments and found the Israelites making an idol, he broke the stone tablets. The law was based around the Ten Commandments. Moses' death before entering the Promised Land was due to his disobedience. Joshua actually led the Israelites into the Promised Land. The genre of this book is the imparting of wisdom. A key theme is Solomon's belief that the only way to happiness is searching for God. The mistakes of his life are outlined, and his path to happiness was a relationship with God. Key events are vague. The author speaks of obtaining wealth, women, and everything else seemingly desired by man. He then talks of not being happy with these material objects. The key figure in this book can only be accurately described as a Son of David. Many speculate that this means Solomon. Since Solomon became king after David and was granted the gift of wisdom, this is logical. This book also contains the famous passages about there being a time for everything. The time to sow and everything else is written in this book. The author ends the book with the conclusion that everything under the sun is futile. Seeking God is the only way to find happiness. The genre of this book is narrative. The key theme is familial duty and loyalty. Key events are the marriage of Elimelech and Naomi's sons Mahlon and Chilion to Ruth and Orpah, the deaths of the father and sons, Naomi's return to Bethlehem with Ruth, Ruth's gleaning of Boaz's fields, and her eventual marriage to Boaz. Key people are Ruth, the main character, Naomi, and Boaz. This book deals with Ruth's loyalty to Naomi. Ruth could have gone home after the death of her husband, but chose to go with Naomi to Bethlehem. As a result of her loyalty, Boaz gave her the job of gleaning his field after workers would reap the

Tuesday, November 19, 2019

Different Views on Assisted suicide (I Agree with Assisted Suicide) Essay

Different Views on Assisted suicide (I Agree with Assisted Suicide) - Essay Example In other words, the medical profession is intended for saving life rather than destroying it. On the other hand, there are many people who believe that assisted suicide should be allowed legally in order to avoid the pain, agony and discomfort of patients in hopeless conditions. In their opinion, nobody wants to sustain their lives in miserable conditions if the hope for survival is completely out of the question. Under such circumstances, it is better to assist those people in finishing their lives rather than forcing them to suffer the pain and agony further. In this paper I argue in favour of assisted suicide after analysing both sides of the issue.

Arguments against assisted suicide

The major argument against assisted suicide concerns the ethical issues involved in it. "Many faith groups within Christian, Muslim, Jewish and other religions believe that God gives life and therefore only God should take it away" (Info: Ethical aspects of PAS, n.d.). Religions argue that life is the blessing of God and man has no authority over it. God has created human life on earth for certain missions. He takes the life back only after the completion of the mission assigned to each person. Preventing God from completing his missions is unethical, according to religions. Religions also argue that a life awaits every human after death, and in order to prepare for that life, God has given humans miseries in their present life. It is the duty of the human to go through all these miseries in order to claim an eternal life after death. Religions believe that assisted suicide will prevent a person from attaining eternal life or salvation.

Another argument raised by critics of assisted suicide is based on the importance of human life on earth. Even though humans have succeeded in collecting information about the outer universe and other planets, their knowledge about this universe is still extremely limited. Science does not have any idea whether life exists on other planets or not. In other words, Earth is the only planet on which life is known to exist, as per the evidence we have until now. Thus, life becomes the most precious thing in this universe. Since humans have superior intellectual power, human life seems to be the most important among other life forms, and it should not be destroyed under any circumstances. The chemistry of life is still unknown to science even though we are living in a most advanced era at present. Humans have succeeded in unveiling many mysteries; however, the secret behind life is still beyond their grasp. Once life is destroyed, nobody can give it back to a person. In short, human life is the most important thing in this universe and it should not be destroyed, according to the arguments of critics of assisted suicide.

The third argument against assisted suicide is related to philosophy. Many prominent philosophers, such as Immanuel Kant and John Locke, argued against assisted suicide in one way or another. Locke argued that life, like liberty, represents an inalienable right, which cannot be taken from, or given away by, an individual. For Kant, suicide was a paradigmatic example of an action that violates moral responsibility. Kant believed that the proper end of rational beings requires self-preservation, and that suicide would therefore be inconsistent with the fundamental value of human life (Chapter 5: The Ethical Debate, 2001,

Sunday, November 17, 2019

Reflection paper Movie Review Example | Topics and Well Written Essays - 750 words

Reflection paper - Movie Review Example There are many features of the film that captured my imagination. The short film was very well made, with different departments such as direction, screenplay, acting and cinematography all supporting and complementing each other. The real stalwart of the movie (as well as in real life) is Nick Vujicic, who transforms himself from being 'a perversion of nature', 'a creature given up by God', etc., to an extraordinary circus performer, an overachiever. As his mentor, the owner of the Butterfly Circus, correctly points out, much of Nick's predicament and diminished sense of self is of his own making. Admonishing Nick for his passive acceptance of fate, the mentor (played by Eduardo Verástegui) urges Nick to achieve something like the other circus performers. Even when Nick falls down as he tries to cross over the river, the mentor allows him to 'manage' on his own. Apathetic and cruel as it might seem, Verastegui knew what he was doing, namely cultivating self-sufficiency in Nick. Just as Verastegui intended, this attitude leads to a breakthrough event, when Nick falls into the water and, in a desperate attempt to keep afloat, discovers that he can swim. As a teacher of special needs children, I can play this film to my students and inspire them to make maximum use of their lives. In addition to the film, I would also play motivational talks given by Nick Vujicic to my students, for the film is only a representation of his own real struggles. The film has also taught me the role of mentorship in uplifting disabled children. For example, the kind of encouragement that a child receives has profound implications for the way it integrates into mainstream society. Usually, a child from one of the minority communities has to overcome more challenges. The minority status may be a result of disability, ethnicity, language, race or

Video game controversy Essay Example for Free

Video game controversy Essay "In 2008, 298.2 million video games were sold in the US, totaling $11.7 billion in revenue. Six of the top ten best-selling video games included violence, with four of the games carrying a Mature rating recommended for persons aged 17 and older." However, violent video games are becoming a serious issue due to increases in bullying, violence toward women and school shootings. Although many individuals will claim that video games are just an easily accessible way to express oneself, there have been thousands of research studies worldwide hoping to find the relations, threats, and even benefits transferred from violent video games to the gamers. In fact, some of the "most focused on" studies claim that playing violent video games does present a threat to a user's psychological health, which leads the gamer to aggressive (dangerous) behavior, increases social isolation, and should be prevented from purchase by minors. "Physical aggression" is defined as behavior intended to harm another person physically. Organizations such as the Journal of the American Academy of Pediatrics, the American Psychiatric Association, and the American Psychological Association have actually been focusing on video games and the ties they have with physical aggression. The American Psychological Association concludes that adolescents who play video games may become increasingly aggressive over time (Yee 454). Several other studies have come to identify a cause/effect relationship between dangerous aggression and violent video games. Obviously, many gamers are not displaying much physical aggression personally while they are blowing the brains out of the "bad guys." However, as many researchers proclaim, "Exposing children and adolescents to violent visual media increases the likelihood that they will engage in physical aggression against another person" (Anderson 445). With that stated, while the gamer may not be exhibiting physical aggression at the time of playing the violent video game, that individual has a high risk of absorbing similar aggressive characteristics, especially after playing the game repeatedly. Along with an expected increase of physical aggression, many researchers believe that "Media violence also produces an emotional desensitization to aggression and violence" (446). A gamer who is newly introduced to the genre of violent video games may become less sensitive or emotionally unresponsive toward violence as exposure to the genre increases and repeated game play occurs. According to James Gee, "Game players are active problem solvers who do not see mistakes as errors, but as opportunities for improvement. Players search for newer, better solutions to problems and challenges" (451). Besides all of the negative opinions on violent video games, and straight from the text, "A recent Texas A&M International study shows that violent games could actually reduce violent tendencies and could be used as a therapy tool for teens and young adults" (Greenberg 456-7). The majority of teens are students, occasionally have emotional stress, or just plainly need to relieve stress, and to many the best way to do so is by pulling out the new Grand Theft Auto. Violence portrayed in video games, similar to reality or not, is thought to "help children with difficult feelings such as powerlessness and fear of real violence" (Greenberg 456).
Similarly, even without a direct one-to-one relationship, cigarette smoking is not a sufficient cause of lung cancer, although it is a closely related cause. Physical aggression may be increased by the direct use of violent video games, just as the risk of being diagnosed with lung cancer increases for the individual who smokes a cigarette. However, the list of risk factors for developing lung cancer stretches far beyond just the cigarette, and even someone who does smoke may be in healthy shape for the majority of a lifetime. With that stated, video games are just one of many possible risk factors for physical aggression and may not be supported with sufficient evidence to claim that high levels of physical aggression result from violent video games. According to Anderson, "There are many causal risk factors involved in the development of a person who frequently behaves in an aggressive or violent manner. There are biological factors, family factors, neighborhood factors, and so on" (446). But regardless of how many other risk factors are present in a youth's life, playing a lot of violent games is likely to increase the frequency of aggression, both in the short term and over time as the youth grows up (Anderson 446). No matter whether the physical aggression in a gamer of the violent genre is extreme or does not seem to pose a serious threat, the physical aggression does exist and risks increasing as the violent games are played more. Repeated consumption of violent video games "create[s] more positive attitudes, beliefs, and expectations regarding aggressive solutions to interpersonal problems" (446). Youth are beginning to conclude that physical aggression is acceptable, and rather normal. Well over 100 experienced researchers, scientists, and scholars worldwide endorse a statement which says: "Overall, the research data conclude that exposure to violent video games causes an increase in the likelihood of aggressive behavior. The effects are both immediate and long term. Violent video games have also been found to increase aggressive thinking, aggressive feelings, physiological desensitization to violence, and to decrease pro-social behavior." Researchers and critics have expressed concerns about appropriate socialization and even addiction of young people who spend too much time alone, staring at a screen. Playing violent video games does present a threat to a user's psychological health in that it increases social isolation. Before video games became such popular entertainment, physical activity and social interaction with other individuals were the usual sources of easy entertainment. According to the website Buzzle, referring to socialization and video games, "Social isolation can be an immediate consequence of continuous and ceaseless gaming. People, especially children, tend to spend lesser time with their friends and others because they want to get back home and continue playing. This makes them aloof from others and so in the long-run lack abilities of social communication and develop a kind of anthropophobia - fear of human company" (Web). Children and teens may also come across confusion about reality and fiction. Being addicted to anything, including violent video games, can place a burden on one's social life. The ability of frequent gamers to witness certain realities of the world becomes limited, and the number of individuals the gamer interacts with eventually decreases, which leads to social isolation.
Almost 60 percent of frequent gamers play with friends. Thirty-three percent play with siblings and 25 percent play with spouses or parents. Even games designed for single players are often played socially (Jenkins 451). With these percentages at a fairly moderate level, social isolation does not look as big a factor as expected. Although gamers are not always socially interacting face to face, social bonding makes up a major part of the controlled play. Many games, such as Call of Duty, allow access to a headset which lets individuals socially interact with one another while playing the game. Also, about 40% of all user time on Facebook is spent playing social games, and Facebook is designed for socially interacting with friends and family on a social networking site. According to Jane McGonigal, "Games make it easy to build stronger social bonds with our friends and family. Studies show that we like and trust someone better after we play a game with them—even if they beat us" (465). Even though Facebook is considered a social networking site, playing social games on the site does not exactly amount to the kind of social interaction that is needed. Also, just because you can talk through a headset and socially interact, you are not exactly familiar with the individual speaking to you. As a result, certain fears may lead to social awkwardness due to the decrease of face-to-face contact, replaced by a headset and other gamers sitting in front of their screens. With a change in social behavior, friendships, family members, peers and other individuals may diagnose a problem with the gamer and consider violent video games to be a direct cause of social isolation. Is it considered constitutional if an American citizen gets limited rights under the First Amendment? Playing violent video games does present a threat to a user's psychological health, and such games should be prevented from purchase by minors. However, Supreme Court justices and other government officials have to decide whether prohibiting violent video games to minors interferes with the individual's rights under the First Amendment, which basically grants American citizens freedoms in specific categories. The harmful effects on minors from playing violent video games are documented and seriously contested (Yee 454). States such as California are already attempting to make laws in which the sale of violent video games to minors is prohibited, just to protect children from the harmful effects of excessively violent video games. Prohibiting the sale of violent video games to minors will assist in preventing unnecessary risk factors resulting from video games. As teens short of the required age cannot watch 'R-rated' movies, they should not be granted the ability to control a version of realism that is similar to "real life" on a screen in front of their faces. Within the First Amendment rights are rights of speech, press, and political freedom. "To strip First Amendment free speech protection from video games that 'lack serious literary, artistic, political, or scientific value for minors'" (Greenberg 455) is, opponents argue, absurd and an infringement of one's constitutional rights.
Besides the argument that preventing the sale of violent video games to minors goes against the First Amendment, some stores may stop carrying Mature-rated games. Game publishers might be afraid to finance them. Developers would not know how to avoid triggering censorship, because even the creators of such laws do not seem to know (456). Government bureaucrats are not fully equipped to "divine the artistic value that a video game has for a 17-year-old." Excitingly, many researchers believe that parents should gain more authority over the types of games or media the child absorbs or chooses to interact with. Instead of the gaming industry being held responsible on the basis of critic reviews and research studies, the children's parents should take much more responsibility for anything the child absorbs, is taught, or learns. The people allowed to limit a minor's free speech rights are his parents or guardian(s) (456). As stated by Yee, "I am hopeful that a majority of justices will agree that parents—not retailers or game makers—should determine which games are appropriate for kids" (454). As Greenberg proclaims at the end of his passage, "Even when video games contain violence, and even when the players are minors whose parents let them play games with violence, picking up that game controller is a form of expression, and it should be free" (457). "It makes no sense to bar children from buying a picture of a naked woman but to allow them to buy video games that portray gratuitous torture" (Yee 454). There are several laws or rules that prevent us from reaching desired expectations due to physical reactions, age, and maturity level, to say the least. If a minor is prohibited from the sale of pornography due to social morals and personal ineligibility, then minors should also face tighter restrictions on access to violent video games. Preventing the distribution of those games to minors is a hopeful act that will not only ensure that parents make such decisions, but will also help protect our children in the years to come. Yee claims "that since the government can 'prohibit the sale of alcohol, tobacco, firearms, driver's licenses and pornography to minors' then 'that same reasoning applies in the foundation and enactment' of his law restricting video games." There is a certain age before one is finally eligible to legally purchase weapons, alcoholic beverages, tobacco, sexual accessories, tattoos, and the list goes on. As violent video games become more advanced and sophisticated, they are being critiqued ever more precisely and are portraying increasingly realistic visuals. With the prevention of sales to minors, unnecessary harm to minors' psychological health from violent video games will be limited and nearly eliminated, leaving access to violent video games only to those old enough to purchase them. Do violent video games present a threat to gamers' psychological health? Although several studies have left many conclusions unanswered, hundreds of researchers, scientists and scholars have worked together and individually to establish the true relationship between violent video games and gamers' psychological health. Playing violent video games does present a threat to a user's psychological health in that it leads to aggressive behavior, increases social isolation, and should be prevented from purchase by minors.

Works Cited
† Writing Arguments: a rhetoric with readings. Ed. Lauren A. Finn. New Jersey: Saddle River, 2012. 445-6. Print. D’Silva, Roy. â€Å"Negative Effects of Video Games. † Buzzle. 10 Oct 2012. Web. 2 Mar 2013. Greenberg, Daniel. â€Å"Why the Supreme Court Should Rule that Violent Video Games are Free Speech. † Writing Arguments: a rhetoric with readings. Ed. Lauren A. Finn. New Jersey: Saddle River, 2012. 454-7. Print. Jenkins, Henry. â€Å"Reality Bytes: Eight Myths about video Games Debunked. † Writing Arguments: a rhetoric with readings. Ed. Lauren A. Finn. New Jersey: Saddle River, 2012. 449-452. Print. McGonigal, Jane. â€Å"Be a Gamer, Save the World. † Writing Arguments. : a rhetoric with readings. Ed. Lauren A. Finn. New Jersey: Saddle River, 2012. 464-6. Print. ProCon. org. Do violent video games contribute to youth violence? ProCon. org. 29 Mar 2011. Web. 2 Mar 2013. Yee, Leland Y. â€Å"Parents Should be able to Control What Kids Watch. † Writing Arguments: a rhetoric with readings. Ed. Lauren A. Finn. New Jersey: Saddle River, 2012. 453-4. Print.