Microeconomics (from Greek prefix mikro- meaning 'small' + economics) is a branch of economics that studies the behaviour of individuals and firms in making decisions regarding the allocation of scarce resources and the interactions among these individuals and firms.[1][2][3]
One goal of microeconomics is to analyze the market mechanisms that establish relative prices among goods and services and allocate limited resources among alternative uses. Microeconomics shows conditions under which free markets lead to desirable allocations. It also analyzes market failure, where markets fail to produce efficient results. Microeconomics stands in contrast to macroeconomics, which involves 'the sum total of economic activity, dealing with the issues of growth, inflation, and unemployment and with national policies relating to these issues'.[2] Microeconomics also deals with the effects of economic policies (such as changing taxation levels) on microeconomic behavior and thus on the aforementioned aspects of the economy.[4] Particularly in the wake of the Lucas critique, much of modern macroeconomic theory has been built upon microfoundations, i.e. based upon basic assumptions about micro-level behavior.
Assumptions and definitions

Microeconomic theory typically begins with the study of a single rational, utility-maximizing individual. To economists, rationality means an individual possesses stable preferences that are both complete and transitive. The technical assumption that preference relations are continuous is needed to ensure the existence of a utility function. Although microeconomic theory can continue without this assumption, it would make comparative statics impossible, since there is no guarantee that the resulting utility function would be differentiable.

Microeconomic theory progresses by defining a competitive budget set, which is a subset of the consumption set. It is at this point that economists make the technical assumption that preferences are locally non-satiated. Without the assumption of local non-satiation (LNS), there is no guarantee that a rational individual would exhaust the entire budget, since utility might not rise with additional consumption.

With the necessary tools and assumptions in place, the utility maximization problem (UMP) can be developed. The utility maximization problem is the heart of consumer theory. It attempts to explain the action axiom by imposing rationality axioms on consumer preferences and then mathematically modeling and analyzing the consequences. The utility maximization problem serves not only as the mathematical foundation of consumer theory but as a metaphysical explanation of it as well: economists use it to explain not only what and how individuals choose, but why. Formally, the utility maximization problem is a constrained optimization problem in which an individual seeks to maximize utility subject to a budget constraint. Economists use the extreme value theorem to guarantee that a solution exists: since the budget set is closed and bounded (compact) and the utility function is continuous, a maximizer exists. Economists call the solution to the utility maximization problem a Walrasian demand function or correspondence.

The utility maximization problem has so far been developed by taking consumer tastes (i.e. consumer utility) as the primitive. An alternative way to develop microeconomic theory is to take consumer choice as the primitive; this approach is referred to as revealed preference theory.
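The UMP can be made concrete with a small numerical sketch. The following is a minimal illustration, not part of the theory above: it assumes NumPy and SciPy are available, and the Cobb-Douglas utility function and all parameter values are invented for the example.

```python
# Utility maximization: max u(x1, x2) = x1**a * x2**(1 - a)
# subject to p1*x1 + p2*x2 <= w (budget constraint), x >= 0.
# Illustrative parameters; assumes NumPy and SciPy are installed.
import numpy as np
from scipy.optimize import minimize

a, p, w = 0.3, np.array([2.0, 5.0]), 100.0  # preference weight, prices, wealth

def neg_utility(x):
    # Minimize the negative of Cobb-Douglas utility.
    return -(x[0] ** a) * (x[1] ** (1 - a))

budget = {"type": "ineq", "fun": lambda x: w - p @ x}  # w - p.x >= 0
res = minimize(neg_utility, x0=[1.0, 1.0], bounds=[(1e-9, None)] * 2,
               constraints=[budget], method="SLSQP")

# Closed-form Walrasian demand for Cobb-Douglas: x1* = a*w/p1, x2* = (1-a)*w/p2
analytic = np.array([a * w / p[0], (1 - a) * w / p[1]])
print(res.x)     # numerical optimum, approx. [15.0, 14.0]
print(analytic)  # [15.0, 14.0] -- the two agree
```

The optimizer spends the whole budget, as local non-satiation suggests it should, and reproduces the closed-form demand.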
The supply and demand model describes how prices vary as a result of a balance between product availability at each price (supply) and the desires of those with purchasing power at each price (demand). The graph depicts a right-shift in demand from D1 to D2 along with the consequent increase in price and quantity required to reach a new market-clearing equilibrium point on the supply curve (S).
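The market-clearing logic just described can be reproduced with linear curves. A minimal sketch; the linear functional forms and all coefficients are assumptions for illustration.

```python
# Linear demand Qd = a - b*P and supply Qs = c + d*P clear where Qd = Qs,
# giving P* = (a - c) / (b + d). Coefficients are illustrative.
def equilibrium(a, b, c, d):
    p_star = (a - c) / (b + d)
    q_star = c + d * p_star
    return p_star, q_star

p1, q1 = equilibrium(a=100, b=2, c=10, d=3)   # initial demand D1
p2, q2 = equilibrium(a=120, b=2, c=10, d=3)   # right-shifted demand D2 (a up)
print(p1, q1)  # 18.0, 64.0
print(p2, q2)  # 22.0, 76.0 -- both price and quantity rise, as in the graph
```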
The theory of supply and demand usually assumes that markets are perfectly competitive. This implies that there are many buyers and sellers in the market and none of them have the capacity to significantly influence prices of goods and services. In many real-life transactions, the assumption fails because some individual buyers or sellers have the ability to influence prices. Quite often, a sophisticated analysis is required to understand the demand-supply relationship for a particular good; the theory works well, however, in situations meeting these assumptions.

Mainstream economics does not assume a priori that markets are preferable to other forms of social organization. In fact, much analysis is devoted to cases where market failures lead to resource allocation that is suboptimal and creates deadweight loss. A classic example of suboptimal resource allocation is that of a public good. In such cases, economists may attempt to find policies that avoid waste, either directly by government control, indirectly by regulation that induces market participants to act in a manner consistent with optimal welfare, or by creating 'missing markets' to enable efficient trading where none had previously existed. This is studied in the fields of collective action and public choice theory. 'Optimal welfare' usually takes on a Paretian norm, which is a mathematical application of the Kaldor–Hicks method. This can diverge from the utilitarian goal of maximizing utility because it does not consider the distribution of goods between people. Market failure in positive economics (microeconomics) is limited in implications without mixing the beliefs of the economist with their theory. The demand for various commodities by individuals is generally thought of as the outcome of a utility-maximizing process, with each individual trying to maximize their own utility under a budget constraint and a given consumption set.

Basic microeconomic concepts

The study of microeconomics involves several key areas:

Demand, supply, and equilibrium

Supply and demand is an economic model of price determination in a perfectly competitive market. It concludes that in a perfectly competitive market with no externalities, per-unit taxes, or price controls, the unit price for a particular good is the price at which the quantity demanded by consumers equals the quantity supplied by producers. This price results in a stable economic equilibrium.

Measurement of elasticities

Elasticity is the measurement of how responsive an economic variable is to a change in another variable. Elasticity can be quantified as the ratio of the change in one variable to the change in another variable, when the latter variable has a causal influence on the former. It is a tool for measuring the responsiveness of a variable, or of the function that determines it, to changes in causative variables in unitless ways. Frequently used elasticities include price elasticity of demand, price elasticity of supply, income elasticity of demand, elasticity of substitution (or constant elasticity of substitution) between factors of production, and elasticity of intertemporal substitution.
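As a worked illustration of the definition above, a point elasticity and a midpoint (arc) elasticity can be computed for a linear demand curve. The functional form and numbers are assumptions for the example.

```python
# Price elasticity of demand: e = (dQ/dP) * (P/Q). For the linear demand
# Q = a - b*P used earlier, dQ/dP = -b, so elasticity varies along the curve.
# A midpoint (arc) version is common when only two observations are available.
def point_elasticity(b, p, a):
    q = a - b * p
    return -b * p / q

def arc_elasticity(p1, q1, p2, q2):
    # Midpoint formula: percentage changes relative to the averages.
    return ((q2 - q1) / ((q1 + q2) / 2)) / ((p2 - p1) / ((p1 + p2) / 2))

print(point_elasticity(b=2, p=18, a=100))   # -0.5625 (inelastic at this point)
print(arc_elasticity(10, 60, 20, 30))       # -1.0 (unit elastic over the arc)
```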
Consumer demand theory

Consumer demand theory relates preferences for the consumption of both goods and services to consumption expenditures; ultimately, this relationship between preferences and consumption expenditures is used to relate preferences to consumer demand curves. The link between personal preferences, consumption, and the demand curve is one of the most closely studied relations in economics. It is a way of analyzing how consumers may achieve equilibrium between preferences and expenditures by maximizing utility subject to consumer budget constraints.

Theory of production

Production theory is the study of production, or the economic process of converting inputs into outputs.[5] Production uses resources to create a good or service that is suitable for use, gift-giving in a gift economy, or exchange in a market economy. This can include manufacturing, storing, shipping, and packaging. Some economists define production broadly as all economic activity other than consumption. They see every commercial activity other than the final purchase as some form of production.

Costs of production

The cost-of-production theory of value states that the price of an object or condition is determined by the sum of the cost of the resources that went into making it. The cost can comprise any of the factors of production: labour, capital, land, and entrepreneurship. Technology can be viewed either as a form of fixed capital (e.g. plant) or circulating capital (e.g. intermediate goods). In the mathematical model for the cost of production, the short-run total cost is equal to fixed cost plus total variable cost. The fixed cost is incurred regardless of how much the firm produces; the variable cost is a function of the quantity being produced.
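The short-run cost identity can be written out directly. A minimal sketch; the fixed cost and the variable-cost schedule are invented for illustration.

```python
# Short-run total cost: STC(q) = FC + VC(q). Fixed cost FC is incurred at any
# output level; variable cost VC depends on quantity q. Numbers are illustrative.
FC = 50.0

def VC(q):          # assumed variable-cost schedule (rising marginal cost)
    return 2.0 * q + 0.1 * q ** 2

def STC(q):
    return FC + VC(q)

def ATC(q):         # average total cost per unit
    return STC(q) / q

for q in (5, 10, 20):
    print(q, STC(q), round(ATC(q), 2))  # (5, 62.5, 12.5) ... (20, 130.0, 6.5)
# STC(0) = 50.0: the fixed cost is paid even at zero output.
```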
Opportunity cost

The economic idea of opportunity cost is closely related to the idea of time constraints. You can do only one thing at a time, which means that, inevitably, you're always giving up other things. The opportunity cost of any activity is the value of the next-best alternative thing you may have done instead. Opportunity cost depends only on the value of the next-best alternative; it doesn't matter whether you have 5 alternatives or 5,000.

Opportunity costs can tell you when not to do something as well as when to do something. For example, you may like waffles, but you like chocolate even more. If someone offers you only waffles, you're going to take them. But if you're offered waffles or chocolate, you're going to take the chocolate. The opportunity cost of eating waffles is sacrificing the chance to eat chocolate. Because the cost of not eating the chocolate is higher than the benefit of eating the waffles, it makes no sense to choose waffles. Of course, if you choose chocolate, you're still faced with the opportunity cost of giving up the waffles; but you're willing to do that because the waffles' opportunity cost is lower than the benefit of the chocolate. Opportunity costs are unavoidable constraints on behaviour, because you have to decide what's best and give up the next-best alternative.

Market structure

Market structure refers to the forms that interacting market systems can take. Different forms of markets are a feature of capitalism and market socialism, with advocates of state socialism often criticizing markets and aiming to substitute or replace markets with varying degrees of government-directed economic planning. Competition acts as a regulatory mechanism for market systems, with government providing regulations where the market cannot be expected to regulate itself. Building codes are one example: in a market regulated purely by competition, several horrific injuries or deaths might be required before companies began improving structural safety, since consumers may at first be unaware of, or unconcerned with, safety issues and so exert no pressure on companies to provide them, while companies would be motivated not to provide proper safety features because of how doing so would cut into their profits. Some examples of markets:
Perfect competition

Perfect competition is a situation in which numerous small firms producing identical products compete against each other in a given industry. Perfect competition leads to firms producing the socially optimal output level at the minimum possible cost per unit. Firms in perfect competition are 'price takers': they do not have enough market power to profitably increase the price of their goods or services. A good example would be that of digital marketplaces such as eBay, on which many different sellers sell similar products to many different buyers. Consumers in a perfectly competitive market have perfect knowledge about the products being sold.

Imperfect competition

In economic theory, imperfect competition is a type of market structure showing some but not all features of competitive markets.

Monopolistic competition

Monopolistic competition is a situation in which many firms with slightly different products compete. Production costs are above what could be achieved by perfectly competitive firms, but society benefits from the product differentiation. Examples of industries with market structures similar to monopolistic competition include restaurants, cereal, clothing, shoes, and service industries in large cities.

Monopoly

A monopoly is a market structure in which a market or industry is dominated by a single supplier of a particular good or service. Because monopolies have no competition, they tend to sell goods and services at a higher price and produce below the socially optimal output level. However, not all monopolies are harmful, especially in industries where multiple firms would result in more costs than benefits (i.e. natural monopolies).[6][citation needed]
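The claim that a monopolist charges more and produces less than a competitive market can be checked with a linear example. The demand curve and cost figures below are assumptions for illustration.

```python
# For linear inverse demand P(Q) = 100 - Q and constant marginal cost MC = 20
# (illustrative numbers): a competitive market prices at P = MC, while a
# monopolist sets marginal revenue equal to MC, restricting output.
a, mc = 100.0, 20.0

q_comp = a - mc              # P = MC  =>  100 - Q = 20  =>  Q = 80
p_comp = a - q_comp          # 20.0

q_mono = (a - mc) / 2        # MR = a - 2Q = MC  =>  Q = 40
p_mono = a - q_mono          # 60.0

print(q_comp, p_comp)        # 80.0 20.0
print(q_mono, p_mono)        # 40.0 60.0 -- higher price, lower output
```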
Oligopoly

An oligopoly is a market structure in which a market or industry is dominated by a small number of firms (oligopolists). Oligopolies can create the incentive for firms to engage in collusion and form cartels that reduce competition, leading to higher prices for consumers and less overall market output.[7] Alternatively, oligopolies can be fiercely competitive and engage in flamboyant advertising campaigns.[citation needed]
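Between the cartel and competitive extremes lies the classic Cournot duopoly, in which each firm chooses output taking its rival's output as given. A sketch using the same illustrative demand and cost numbers as the monopoly example above.

```python
# Cournot duopoly with inverse demand P = 100 - (q1 + q2) and MC = 20:
# each firm best-responds to its rival, q_i = (a - mc - q_j) / 2.
# Iterating the best responses converges to the Nash equilibrium.
a, mc = 100.0, 20.0
q1 = q2 = 0.0
for _ in range(50):
    q1 = (a - mc - q2) / 2.0
    q2 = (a - mc - q1) / 2.0

total = q1 + q2
print(round(q1, 2), round(q2, 2))   # ~26.67 each
print(round(total, 2), a - total)   # Q ~53.33, P ~46.67: between the cartel
# outcome (Q = 40, P = 60) and the competitive one (Q = 80, P = 20).
```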
Monopsony

A monopsony is a market where there is only one buyer and many sellers.

Oligopsony

An oligopsony is a market where there are a few buyers and many sellers.

Game theory

Game theory is a major method used in mathematical economics and business for modeling the competing behaviors of interacting agents. The term 'game' here implies the study of any strategic interaction between people. Applications include a wide array of economic phenomena and approaches, such as auctions, bargaining, mergers and acquisitions pricing, fair division, duopolies, oligopolies, social network formation, agent-based computational economics, general equilibrium, mechanism design, and voting systems, and span such broad areas as experimental economics, behavioral economics, information economics, industrial organization, and political economy.

Labor economics

Labor economics seeks to understand the functioning and dynamics of the markets for wage labor. Labor markets function through the interaction of workers and employers. Labor economics looks at the suppliers of labor services (workers) and the demanders of labor services (employers), and attempts to understand the resulting pattern of wages, employment, and income. In economics, labor is a measure of the work done by human beings. It is conventionally contrasted with such other factors of production as land and capital. Some theories have developed a concept called human capital (referring to the skills that workers possess, not necessarily their actual work), although there are also counterposing macroeconomic theories that regard human capital as a contradiction in terms.

Welfare economics

Welfare economics is a branch of economics that uses microeconomic techniques to evaluate well-being from the allocation of productive factors as to desirability and economic efficiency within an economy, often relative to competitive general equilibrium.[9] It analyzes social welfare, however measured, in terms of the economic activities of the individuals that compose the theoretical society considered. Accordingly, individuals, with associated economic activities, are the basic units for aggregating to social welfare, whether of a group, a community, or a society, and there is no 'social welfare' apart from the 'welfare' associated with its individual units.

Economics of information

Information economics, or the economics of information, is a branch of microeconomic theory that studies how information and information systems affect an economy and economic decisions. Information has special characteristics: it is easy to create but hard to trust, easy to spread but hard to control, and it influences many decisions. These special characteristics (as compared with other types of goods) complicate many standard economic theories.[10] The economics of information has recently become of great interest to many, possibly due to the rise of information-based companies in the technology industry.[11] From a game-theoretic approach, we can relax the usual assumption that agents have complete information in order to examine the consequences of incomplete information. This gives rise to many results that are applicable to real-life situations. For example, relaxing this assumption makes it possible to scrutinize the actions of agents in situations of uncertainty, and to understand more fully the impacts, both positive and negative, of agents seeking out or acquiring information.[11]
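A tiny decision problem illustrates why acquired information has measurable value. The states, actions, and payoffs below are invented for the example; the quantity computed is the expected value of perfect information.

```python
# A toy decision problem showing why acquiring information has value.
# Two equally likely states, two actions; payoffs are illustrative assumptions.
payoff = {  # payoff[action][state]
    "umbrella": {"rain": 5.0, "sun": 2.0},
    "no_umbrella": {"rain": -10.0, "sun": 8.0},
}
prob = {"rain": 0.5, "sun": 0.5}

# Without information: pick the action with the best expected payoff.
ev = {act: sum(prob[s] * pay[s] for s in prob) for act, pay in payoff.items()}
ev_uninformed = max(ev.values())                     # max(3.5, -1.0) = 3.5

# With perfect information: observe the state, then choose the best action.
ev_informed = sum(prob[s] * max(pay[s] for pay in payoff.values())
                  for s in prob)                     # 0.5*5 + 0.5*8 = 6.5

print(ev_uninformed, ev_informed, ev_informed - ev_uninformed)  # 3.5 6.5 3.0
# The difference, 3.0, is the most this agent should pay for the information.
```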
Applied

United States Capitol Building: meeting place of the United States Congress, where many tax laws are passed, which directly impact economic welfare. This is studied in the subject of public economics.
Applied microeconomics includes a range of specialized areas of study, many of which draw on methods from other fields. Industrial organization examines topics such as the entry and exit of firms, innovation, and the role of trademarks. Labor economics examines wages, employment, and labor market dynamics. Financial economics examines topics such as the structure of optimal portfolios, the rate of return to capital, econometric analysis of security returns, and corporate financial behavior. Public economics examines the design of government tax and expenditure policies and the economic effects of these policies (e.g., social insurance programs). Political economy examines the role of political institutions in determining policy outcomes. Health economics examines the organization of health care systems, including the role of the health care workforce and health insurance programs. Education economics examines the organization of education provision and its implications for efficiency and equity, including the effects of education on productivity. Urban economics, which examines the challenges faced by cities, such as sprawl, air and water pollution, traffic congestion, and poverty, draws on the fields of urban geography and sociology. Law and economics applies microeconomic principles to the selection and enforcement of competing legal regimes and their relative efficiencies. Economic history examines the evolution of the economy and economic institutions, using methods and techniques from the fields of economics, history, geography, sociology, psychology, and political science.

History

The distinction between microeconomics and macroeconomics was introduced in 1933 by the Norwegian economist Ragnar Frisch (Nobel Prize, 1969).
Pareto principle
The Pareto principle (also known as the 80/20 rule, the law of the vital few, or the principle of factor sparsity)[1][2] states that, for many events, roughly 80% of the effects come from 20% of the causes.[3] Management consultant Joseph M. Juran suggested the principle and named it after the Italian economist Vilfredo Pareto, who noted the 80/20 connection while at the University of Lausanne in 1896, as published in his first work, Cours d'économie politique. Essentially, Pareto showed that approximately 80% of the land in Italy was owned by 20% of the population. It is an axiom of business management that '80% of sales come from 20% of clients'.[4] Mathematically, the 80/20 rule is roughly followed by a power law distribution (also known as a Pareto distribution) for a particular set of parameters, and many natural phenomena have been shown empirically to exhibit such a distribution.[5] The Pareto principle is only tangentially related to Pareto efficiency. Pareto developed both concepts in the context of the distribution of income and wealth among the population.
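The mathematical connection can be simulated directly: draws from a Pareto distribution with a suitably chosen index put roughly 80% of the total in the top 20% of draws. A sketch assuming NumPy is available; because the distribution is heavy-tailed, the sample share fluctuates from run to run.

```python
# Simulating the claim: draws from a Pareto distribution with index
# alpha = log4(5) ~ 1.16 put roughly 80% of the total in the top 20% of draws.
import numpy as np

alpha = np.log(5) / np.log(4)                  # ~1.16
rng = np.random.default_rng(seed=0)
x = 1.0 + rng.pareto(alpha, size=1_000_000)    # classical Pareto, minimum 1

x.sort()
top20 = x[int(0.8 * x.size):]                  # largest 20% of observations
print(top20.sum() / x.sum())                   # ~0.8, varying run to run
```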
In economics

The original observation was in connection with population and wealth. Pareto noticed that approximately 80% of Italy's land was owned by 20% of the population.[6] He then carried out surveys in a variety of other countries and found to his surprise that a similar distribution applied. A chart that gave the inequality a very visible and comprehensible form, the so-called 'champagne glass' effect,[7] was contained in the 1992 United Nations Development Program Report, which showed that the distribution of global income is very uneven, with the richest 20% of the world's population controlling 82.7% of the world's income.[8]
The Pareto principle can also be seen as applying to taxation. In the US, the top 20% of earners paid roughly 80-90% of Federal income taxes in 2000 and 2006,[10] and again in 2018.[11] However, while such figures have been associated with meritocracy, the principle should not be confused with farther-reaching implications. As Alessandro Pluchino at the University of Catania in Italy points out, other attributes do not necessarily correlate. Using talent as an example, he and other researchers state that 'the maximum success never coincides with the maximum talent, and vice-versa', and that such outcomes are partly the result of chance.[12]

In computing

In computer science, the Pareto principle can be applied to optimization efforts.[13] For example, Microsoft noted that by fixing the top 20% of the most-reported bugs, 80% of the related errors and crashes in a given system would be eliminated.[14] Lowell Arthur expressed that '20 percent of the code has 80 percent of the errors. Find them, fix them!'[15] It has also been observed that, in general, 80% of a piece of software can be written in 20% of the total allocated time; conversely, the hardest 20% of the code takes 80% of the time. This factor is usually a part of COCOMO estimating for software coding.

In sports

It has been inferred that the Pareto principle applies to athletic training, where roughly 20% of the exercises and habits have 80% of the impact, so the trainee should not focus so much on varied training.[16] This does not necessarily mean that having a healthy diet or going to the gym are unimportant, only that they are not as significant as the key activities. Note that this 80/20 rule has yet to be scientifically tested in controlled studies of athletic training. In baseball, the Pareto principle has been perceived in Wins Above Replacement (an attempt to combine multiple statistics to determine a player's overall importance to a team): '15% of all the players last year produced 85% of the total wins with the other 85% of the players creating 15% of the wins. The Pareto Principle holds up pretty soundly when it is applied to baseball.'[17]

Occupational health and safety

Occupational health and safety professionals use the Pareto principle to underline the importance of hazard prioritization. Assuming that 20% of the hazards account for 80% of the injuries, safety professionals can categorize hazards and target the 20% that cause 80% of the injuries or accidents. Alternatively, if hazards are addressed in random order, a safety professional is more likely to fix one of the 80% of hazards that account for only some fraction of the remaining 20% of injuries.[18] Aside from ensuring efficient accident prevention practices, the Pareto principle also ensures that hazards are addressed in an economical order, as the technique directs resources toward preventing the most accidents.[19]

Other applications

In engineering control theory, such as for electromechanical energy converters, the 80/20 principle applies to optimization efforts.[13] The law of the few can also be seen in betting, where it is said that with 20% effort you can match the accuracy of 80% of the bettors.[20]
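Returning to the bug-fixing application in the computing section above, the prioritization it describes amounts to Pareto-chart logic: rank categories by count and take the smallest set covering 80% of reports. The category names and counts below are invented for illustration.

```python
# Pareto-chart logic: rank bug categories by report count and find the
# smallest set of categories that covers 80% of all reports.
reports = {"crash_on_save": 420, "ui_freeze": 250, "bad_render": 130,
           "slow_search": 90, "typo": 60, "misc": 50}

total = sum(reports.values())
covered, chosen = 0, []
for name, count in sorted(reports.items(), key=lambda kv: -kv[1]):
    chosen.append(name)
    covered += count
    if covered >= 0.8 * total:
        break

print(chosen, covered / total)
# ['crash_on_save', 'ui_freeze', 'bad_render'] 0.8 -- 3 of 6 categories
```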
In the systems science discipline, Joshua M. Epstein and Robert Axtell created an agent-based simulation model called Sugarscape, using a decentralized modeling approach based on individual behavior rules defined for each agent in the economy. Wealth distribution and Pareto's 80/20 principle emerged in their results, which suggests the principle is a collective consequence of these individual rules.[21]

The Pareto principle has many applications in quality control.[22][citation needed] It is the basis for the Pareto chart, one of the key tools used in total quality control and Six Sigma techniques. The Pareto principle serves as a baseline for ABC-analysis and XYZ-analysis, widely used in logistics and procurement for the purpose of optimizing stock of goods, as well as the costs of keeping and replenishing that stock.[23] In health care in the United States, in one instance 20% of patients have been found to use 80% of health care resources.[24][25][26] Some cases of super-spreading conform to the 20/80 rule,[27] where approximately 20% of infected individuals are responsible for 80% of transmissions, although super-spreading can still be said to occur when super-spreaders account for a higher or lower percentage of transmissions.[28] In epidemics with super-spreading, the majority of individuals infect relatively few secondary contacts. The Dunedin Study has found that 80% of crimes are committed by 20% of criminals.[29] This statistic has been used to support both stop-and-frisk policies and broken windows policing, as catching those criminals committing minor crimes will supposedly net many criminals wanted for (or who would normally commit) larger ones. Many video rental shops reported in 1988 that 80% of revenue came from 20% of videotapes. A video-chain executive discussed the 'Gone with the Wind syndrome', however, in which every store had to offer classics like Gone with the Wind, Casablanca, or The African Queen to appear to have a large inventory, even if customers very rarely rented them.[30]

Mathematical notes

The idea has a rule-of-thumb application in many places, but it is commonly misused. For example, it is a misuse to state that a solution to a problem 'fits the 80/20 rule' just because it fits 80% of the cases; it must also be that the solution requires only 20% of the resources that would be needed to solve all cases. Additionally, it is a misuse of the 80/20 rule to interpret a small number of categories or observations.

This is a special case of the wider phenomenon of Pareto distributions. If the Pareto index α, which is one of the parameters characterizing a Pareto distribution, is chosen as α = log₄5 ≈ 1.16, then one has 80% of effects coming from 20% of causes. It follows that one also has 80% of that top 80% of effects coming from 20% of that top 20% of causes, and so on. Eighty percent of 80% is 64% and 20% of 20% is 4%, so this implies a '64/4' law, and similarly a '51.2/0.8' law. Similarly, for the bottom 80% of causes and bottom 20% of effects, the bottom 80% of the bottom 80% of causes account for only 20% of the remaining 20% of effects. This is broadly in line with world population/wealth data, in which the bottom 60% of the people own 5.5% of the wealth, approximating a 64/4 connection. The 64/4 correlation also implies a 32% 'fair' area between the 4% and 64%, where the lower 80% of the top 20% (16%) and the upper 20% of the bottom 80% (also 16%) relate to the corresponding lower-top and upper-bottom of effects (32%). This is also broadly in line with such data, in which the second 20% control 12% of the wealth and the bottom of the top 20% (presumably) control 16% of the wealth.
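The self-similar iteration can be verified analytically. For a Pareto distribution, the share of effects attributable to the top fraction f of causes is f^(1 − 1/α); the short check below, a sketch using only the standard library, confirms the 80/20, 64/4, and 51.2/0.8 figures for α = log₄5.

```python
# Analytic check of the self-similar "64/4" and "51.2/0.8" laws: for a Pareto
# distribution with index alpha, the top fraction f of causes accounts for a
# share f**(1 - 1/alpha) of the effects. With alpha = log4(5) the 80/20 split
# is exact, and iterating it gives 64/4 and 51.2/0.8.
import math

alpha = math.log(5) / math.log(4)            # ~1.16, chosen so 20% -> 80%
share = lambda f: f ** (1.0 - 1.0 / alpha)   # share of effects from top f

for f in (0.2, 0.04, 0.008):
    print(f, round(share(f), 3))             # 0.8, 0.64, 0.512
```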
The term 80/20 is only shorthand for the general principle at work. In individual cases, the distribution could just as well be, say, nearer to 90/10 or 70/30. There is no need for the two numbers to add up to 100, as they are measures of different things (e.g., 'number of customers' vs. 'amount spent'). However, each case in which they do not add up to 100% is equivalent to one in which they do; for example, as noted above, the '64/4 law' (in which the two numbers do not add up to 100%) is equivalent to the '80/20 law' (in which they do). Thus, specifying two percentages independently does not lead to a broader class of distributions than specifying the larger one and letting the smaller one be its complement relative to 100%; there is only one degree of freedom in the choice of that parameter.

Adding up to 100 leads to a nice symmetry. For example, if 80% of effects come from the top 20% of sources, then the remaining 20% of effects come from the lower 80% of sources. This is called the 'joint ratio', and can be used to measure the degree of imbalance: a joint ratio of 96:4 is very imbalanced, 80:20 is significantly imbalanced (Gini index: 60%), 70:30 is moderately imbalanced (Gini index: 40%), and 55:45 is just slightly imbalanced (Gini index: 10%).

The Pareto principle is an illustration of a 'power law' relationship, which also occurs in phenomena such as brush fires and earthquakes.[31] Because it is self-similar over a wide range of magnitudes, it produces outcomes completely different from Normal or Gaussian distribution phenomena. This fact explains the frequent breakdowns of sophisticated financial instruments, which are modeled on the assumption that a Gaussian relationship is appropriate to, for example, stock price movements.[32]

Equality measures

Gini coefficient and Hoover index

Using the 'A : B' notation (for example, 0.8:0.2) and with A + B = 1, inequality measures like the Gini index (G) and the Hoover index (H) can be computed. In this two-class case both are the same and equal A − B = 2A − 1.
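These indices can be computed directly from the two-class Lorenz curve. A minimal sketch: for an A:B split the Lorenz curve is piecewise linear through (0, 0), (A, B), (1, 1), and both indices reduce to A − B.

```python
# Gini and Hoover indices for a two-class "A : B" split with A + B = 1:
# the bottom A of the population holds B of the wealth, so the Lorenz curve
# runs through (0, 0), (A, B), (1, 1). Both indices reduce to A - B = 2A - 1.
def hoover(a):
    # Largest vertical gap between the equality line and the Lorenz curve.
    return a - (1 - a)

def gini(a):
    b = 1 - a
    area_under_lorenz = 0.5 * a * b + 0.5 * b * (b + 1)   # = b when a + b = 1
    return 1 - 2 * area_under_lorenz                      # = 2a - 1

for a in (0.8, 0.7, 0.55):
    print(f"{a:.2f}:{1 - a:.2f}  G = {gini(a):.2f}  H = {hoover(a):.2f}")
# 0.80:0.20  G = 0.60  H = 0.60, etc. -- G and H coincide in this case.
```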
Theil index

The Theil index is an entropy measure used to quantify inequality. The measure is 0 for 50:50 distributions and reaches 1 at a Pareto distribution of 82:18. Higher inequalities yield Theil indices above 1.[33]
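For the two-group A:B split used above, the Theil index T = Σᵢ wᵢ ln(wᵢ/pᵢ) can be evaluated directly; this coarse two-group version comes out near, though not exactly at, the quoted value of 1 for 82:18.

```python
# Theil index for a two-class split: T = sum_i w_i * ln(w_i / p_i),
# where group i has population share p_i and wealth share w_i. A sketch for
# the symmetric A:B case; the continuous Pareto value is slightly higher.
import math

def theil(a):
    groups = [(1 - a, a), (a, 1 - a)]   # (population share, wealth share)
    return sum(w * math.log(w / p) for p, w in groups)

print(round(theil(0.5), 3))    # 0.0   -- perfect equality at 50:50
print(round(theil(0.82), 3))   # ~0.97 -- roughly 1 at the 82:18 split
```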