Empirical marketing research

Minervin I.G.

About the article by Chryssokidis J.M., Wong W.

When launching new products on foreign markets, managers can apply different strategies: the simultaneous or the sequential release of products in different countries. The authors note that a sequential release strategy is often seen as a means of reducing risk, but in reality this strategy ignores the fact that the success of new products depends to a large extent on the timeliness of their entry into international markets.

The authors note that the acceleration of new product development processes and the shortening of product life cycles mean that firms must strive for the shortest possible commercialization times in order to prevent competitors from getting ahead of them. The speed of market entry is therefore an important source of competitive advantage, leading to higher overall sales and higher profitability of operations. The greater the gain in time compared to competitors in the market, the greater the opportunity for the innovator to assert its brand position and bring consumers' expectations closer to the parameters of its products. Any delay in launching a product to market means a loss in sales, an increase in the payback period, a decrease in the total profit over the entire product life cycle, and a weakening of competitiveness. All these considerations become even more significant when new products are launched on multiple, including international, markets.

An important feature of the modern market is competition on time (along with competition on price and on quality). The speed and timeliness of developing and delivering a product or service, and of other responses to consumer requests, are important not only in the domestic but also in the international market, and the globalization of markets and competition further enhances this importance. Practice shows that firms increasingly orient product development toward the markets of various countries and coordinate product launches on a global scale; the "Ford Escort" cars are one example. Some firms choose to enter markets worldwide simultaneously, such as Microsoft, which released Windows 95 on August 24, 1995. Others opt for a phased release as the product becomes available, first in one or two markets and then in others, assuming that a slower, sequential entry is less risky (if something goes wrong in one country's market, steps can be taken to resolve the problem before it manifests itself in other markets) and that a general delay of a simultaneous launch is fraught with more serious consequences for the financial and market position of the firm. The authors emphasize that the magnitude and causes of delays under simultaneous and sequential market entry have not yet been explored, while knowledge of them would improve the understanding and solution of a range of problems.

The authors undertook their research in order to find out the frequency of adherence to planned deadlines and delays with either strategy, to determine the reasons for the delays and the impact they had on the commercial effectiveness of new products.

Within the framework of the study, the authors understand the timeliness of the international launch of new products (MZNP) as the readiness of the product for sale in multiple target markets of the company in accordance with the planned time frame. In other words, timeliness refers to the firm's ability to meet the schedule set by its management. The measure of timeliness is understood as the deviation of the actual timing of the release of new products to target markets from the planned ones. Product readiness for sale means the completion of all necessary technical measures, compliance with the provisions of local laws and government regulations and the fulfillment of other conditions for the commercialization of new products. A product that has not passed all official administrative certification procedures and does not meet all local legal requirements cannot be considered “technically” ready for sale.

Establishing planned or desired timelines for the release of new products to markets depends on the qualifications and experience of managers, their knowledge of the strengths and weaknesses of the organization, and an estimate of the time required to bring the product to readiness. Despite these potential differences and subjective decision-making factors, it is reasonable to assume that deviations of the actual market-entry timing from the planned one arise from causes that were not foreseen at the planning stage. Significant delays in MZNP indicate significant managerial mistakes and have important consequences for the firm's operations. The main focus of this study is on analyzing these causes, the understanding of which is important for managers.

Hypothetical Factors of Timeliness of International New Product Launches

For the purpose of analysis, the authors put forward a number of preliminary hypotheses that could explain the specific results of promoting products to foreign markets in order to confirm or refute them as a result of empirical research. Thus, it is assumed that the timeliness of the MZNP depends on a number of factors related to the external and internal environment of the organization, product strategy and product characteristics. Environmental factors include:

1. Local legislation and state regulation that determine the procedure, legal and technical conditions for the sale of products on the market of a given country. The corresponding procedures, especially in a number of industries (pharmaceuticals, chemistry, food products, telecommunications), can be quite complex and time-consuming (hypothesis 1a);

2. The technological environment, which acts in several directions and can necessitate additional, time-consuming adaptation of products to the relevant markets, including:

Differences in the level of product and technology standardization and technical standards (hypothesis 1b);

Differences in the requirements for customization of products (for example, adaptation to the signals of transmitting telecommunication devices) (hypothesis 1c);

The general level of technological development inherent in the market of a given country, expressed in indicators of technology obsolescence and the rate of renewal of production technology and products (for example, the impossibility of launching cars with engines equipped with advanced catalytic cleaning technology in markets where unleaded fuel is scarce or unavailable). It is assumed that low rates of technological renewal can delay the MZNP (hypothesis 1d).

3. Market and consumer environment, i.e. the degree of heterogeneity, or differentiation of the market - the demands and preferences of consumers and the supply of goods, the nature of purchases and the habits of purchasing goods, the degree of consumer interest in information about the product. It is assumed that the high degree of market heterogeneity requires more effort to adapt the marketing strategy, increasing the likelihood of delays in its implementation (hypothesis 1e).

4. Competitive environment, the degree of intensity of competition, aggressiveness and hostility of the behavior of competitors, perceived by the management of the company as a threat. It is assumed that the more dangerous the competitive environment, the higher the desire of managers to implement the strategy of timeliness of the MZNP, and vice versa (hypothesis 1f).

Among internal factors, i.e. intra-firm functions and processes of development and commercialization of new products that determine the marketing strategy in international markets, the authors include the following:

1. Organization of the development and mastering of new products, the success of which depends on the quality of project planning; technical and market research; marketing activities, including testing, trial sales, coordination of sales channels, advertising and product promotion; the level of technological design; market positioning, etc. Factors such as a clear organization of new product development processes, sound project planning, and a high degree of cross-functional integration and coordination of the development and mastering of new products should reduce the likelihood of violations of the timeliness of the MZNP (hypotheses 2a, 2b and 2c, respectively).

2. Mechanisms for coordinating the firm's activities in multinational target markets. The timeliness of the MZNP depends on the successful application of administrative coordination mechanisms at the international level, including between the firm's headquarters, its subsidiaries and agents. Such mechanisms used by transnational corporations can be divided into two groups:

More formalized and structured mechanisms, including measures to reorganize divisions, centralize or decentralize decision-making, develop procedures, planning, performance evaluation and monitoring of results;

More informal and “subtle” coordination mechanisms, including the use of informal communication channels (business trips, meetings, personnel exchange, personal contacts of managers), horizontal links between departments (creation of temporary teams and target groups, matrix structures, committees), the introduction of a common organizational culture.

It is assumed that the absence or insufficiently intensive use of informal methods impairs communication, complicates coordination and therefore increases the likelihood of violations of the timeliness of the MZNP (hypothesis 2d), while more rigid, formal methods do not give a tangible effect in terms of MZNP timeliness (hypothesis 2e).

The next group of MZNP timeliness factors is associated with the product strategy of the firm and the characteristics of the products.

A product strategy encompasses factors such as the diversity of target markets (decisions regarding key and secondary markets in which new products will be launched), the scale of technical and marketing resources allocated, and the way of entering the market. Launching a new product for diversified market segments may require significant adaptation, in terms of both technology and marketing, to the markets of different countries, additional investment in design and marketing, and even organizational restructuring, reorientation of the entire business and radical changes in management practices. Marketing resources include sales personnel, trainers for technical and sales staff, after-sales service personnel, and the equipment and funds needed to ensure product promotion, distribution channels, etc. Technical resources likewise include the relevant personnel, funds and other elements of technical facilities and software of the required quality. It is also important that the new product development project matches the existing marketing and technical competencies and resources of the firm.

It follows that the likelihood of deviations from the timeliness of the MZNP increases with an increase in the degree of differentiation of target markets (hypothesis 3a), insufficient marketing resources (hypothesis 3b), and insufficient technical resources (hypothesis 3c).

Factors related to the characteristics of products are mainly expressed, firstly, in the influence they have on the technology of product application and the set of marketing methods, and, secondly, in the degree of excellence, the comparative advantages of the products. The qualitative characteristic, the degree of perfection of the product is considered by many researchers as the most important factor in its successful promotion on the market. In addition, the conditions and advantages of using and maintaining a product in the perception of a potential consumer are important. Thus, market acceptance of innovative products slows down significantly if potential consumers consider the innovation incompatible with the prevailing type of consumer behavior or too difficult to master. Hence, two assumptions are made: the high quality of the new product that meets the needs of the target markets reduces the likelihood of violations of the timeliness of the MZNP (hypothesis 3d); the mismatch of the operating and maintenance conditions of the product with the target markets increases the likelihood of non-compliance with the planned MZNP dates (hypothesis 3e).

Finally, another aspect of the analysis is to investigate the relationship between the timeliness of the MZNP and the market success of a product. The authors proceed from the presence of a positive relationship (hypothesis 4), since a delay in MZNP indicates certain management mistakes that could have an impact on the commercial efficiency of the new product and on the entire activity of the company.

Research methodology and results

The research was carried out in several stages. At the first stage, a preliminary survey of the heads of six TNCs was carried out, which made it possible to work out the methods of detailed questioning. Then a wide survey of other firms was carried out. The main source of information was extensive interviews with marketing managers responsible for international planning, product development and commercialization, as well as with heads of central product development, production and sales.

A total of 30 case studies of new product launches carried out by large TNCs with European headquarters located in the UK were studied. Among them were 11 North American companies, 7 British, 2 joint British-American, 9 Japanese and one from Hong Kong. The study selected mainly high-tech and largely internationalized industries with a high rate of technological change and product innovation (telecommunications, electronics and computer technology, photographic and measuring equipment), in which firms are usually forced to introduce new products into multiple markets on a tight schedule. For reasons of data availability, the study was mainly limited to product launches in Western European markets.

Of the surveyed examples, 60% represent updated and modernized types of products and 40% - fundamentally new types. 19 cases (63% of the sample) represent mass production, 11 (37%) - products intended for specific market segments. In 15 cases (50%) there was a delay, in 15 (50%) - compliance with the timetable for entering the market (p. 25).

The measurement of time parameters was based on two estimates: 1) the difference in time between the actual and planned MZNP dates; 2) an expert assessment of the effectiveness of the MZNP process, expressed in points (from -5 - “very slow”, to +5 - “very fast”). Also, using an expert assessment for a number of indicators (sales volume, profitability, payback period, etc.), the commercial success of launching a particular product on the market was assessed.

All investigated cases were divided into two groups: 1) simultaneous entry, meaning a planned entry into all target markets within 1-2 months (8 cases, or 27%); 2) sequential entry, when the planned entry period exceeds 2 months (22 cases, or 73%). All simultaneous launches turned out to be timely, i.e. were carried out without violations of the planned dates. In the latter group there were cases of delays (15 cases), ranging from 1 to 12 months and more. These data, according to the authors, cast doubt on the widespread opinion about the lower riskiness of sequential launches. Managers' penchant for sequential launches, out of fear of the difficulties and risks associated with organizing simultaneous launches in many countries, often led to the opposite results.

Analysis of the timeliness factors (or reasons for deviations) of the MZNP showed that the most significant are 8 key complex factors (KF), the first four related to the marketing strategy and product characteristics, and the second four to the group of internal factors:

KF1 - sufficiency of marketing resources for specific new products;

KF2 - sufficiency of technical resources for specific new products;

KF3 - compliance of the operating and maintenance conditions of new products with target markets;

KF4 - product perfection in terms of its quality characteristics and consumer advantages;

KF5 - integration in the process of developing and mastering new products;

KF6 - perfection of the processes of developing and mastering new products, including project planning, testing, coordination, promotion, etc.;

KF7 - preliminary development of the product concept, target purpose, potential market segments, consumers, knowledge of their needs;

KF8 - internal communication and coordination between headquarters, branches and divisions using informal, “soft” methods.

Along with high correlation coefficients between all the factors noted, a particularly strong correlation is observed between marketing resources (KF1), the new product development factors (KF5, KF6, KF7) and intra-organizational communication (KF8), as well as between technical resources (KF2) and product development (KF5, KF6, KF7). The adequacy of the distribution system, distribution channels and personnel training has both a direct and an indirect effect, influencing the formation of the market and the process of product development, as well as communication and the quality of market information. The sufficiency and quality of engineering and technical resources plays a key role in the development of a new product and determines its parameters and characteristics. In turn, the perfection of the new product development processes is reflected in its high quality characteristics (correlation between KF4 and KF5-KF7).

Thus, at this stage the correlation analysis showed that the interrelated factors KF1-KF8 contribute to the timeliness of MZNP. The study then examined the scale of differences in the influence of these factors between cases with adherence to and violation of the planned MZNP terms (by the difference in mean values and standard deviations for each factor between the two groups of cases). This analysis also showed the relationship between all the factors and their positive impact on the timeliness of MZNP. Thus, the validity of hypotheses 2a-2e and 3b-3e was confirmed.
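The comparison procedure described here (correlations between factors plus a difference of means and standard deviations between on-time and delayed cases) can be sketched as follows. This is only a minimal illustration on invented factor scores and group labels; the authors' actual survey data are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert scores for factors KF1-KF8 in 30 launch cases
# (placeholders invented for illustration, not the authors' data).
scores = rng.normal(loc=3.5, scale=1.0, size=(30, 8))
on_time = rng.integers(0, 2, size=30).astype(bool)  # True = launch met the planned schedule

# Pairwise correlations between the eight factors
corr = np.corrcoef(scores, rowvar=False)

# Difference in means and standard deviations between on-time and delayed cases
for k in range(8):
    diff_mean = scores[on_time, k].mean() - scores[~on_time, k].mean()
    diff_std = scores[on_time, k].std(ddof=1) - scores[~on_time, k].std(ddof=1)
    print(f"KF{k + 1}: mean difference {diff_mean:+.2f}, std difference {diff_std:+.2f}")
```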

At the same time, the results of the survey revealed only the secondary importance of the differentiation factor of target markets, as well as factors related to the external environment (legal, technological, competitive). According to these results, the main reasons for the violation of the terms of MZNP are associated with the internal organization of the processes of mastering new products, primarily with shortcomings in the field of organizational communications and the provision of quality resources. Solving the problems of coordination, communication, provision of marketing and technical resources makes it as easy as possible to enter a significant number of target markets in individual countries. The best results come from realistic planning and alignment of resources to the needs of specific target markets. Thus, hypotheses 1a-1f as well as 3a were not confirmed.

Calculations of the impact of the timeliness of the MZNP, measured in months, on the commercial efficiency of new products (in terms of sales volume, profitability, consumer preferences, etc.) showed that these indicators deteriorate in cases of delays. Thus, hypothesis 4 was clearly confirmed. The most significant is the positive correlation between the MZNP timeliness parameter and indicators of sales volume and profitability of new products, which proves its importance as a key factor in the economic efficiency of innovative activities in international markets.

The correlation between the timeliness of new product development (RNP) and the timeliness of the MZNP was also positive and significant. The average planned RNP duration in the 30 cases studied was 14.4 months, and the actual duration 21.1 months, an average delay of 6.8 months. In 7 cases there were no violations of the planned deadlines, and in 23 there was a delay. The frequency of RNP lag is higher in cases of delayed MZNP than in cases of timely MZNP: in 14 out of 15 cases of a delayed product launch there was also a delay in development, whereas among the 15 timely launches there were only 9 such cases. The average duration of RNP delays is also greater in cases with a delayed MZNP (10 months) than in cases of timely market entry (3.9 months). At the same time, the correlation between the timeliness of the MZNP and financial indicators turned out to be more significant than the correlation between the timeliness of the RNP and these indicators. This, according to the authors, means that despite the strong positive correlation between the two timeliness parameters, the financial consequences of delaying the MZNP may be larger than the consequences of a delay in completing the RNP. The results of the study show that the success of developing and commercializing new products in international markets depends on the ability of managers to adhere to the planned MZNP schedule in order to maximize the gains achieved during product development. Delays in market entry can negate advances made at the product development stage.

Conclusions

Previous research focused on the development and launch of new products on the market has shown the importance of the problem of timeliness. This study seeks to deepen this analysis, addressing the issues associated with a complex and risky decision to promote new products in multinational markets. The main findings for the management process are as follows.

1. The higher frequency of product launch delays with sequential versus simultaneous MZNP serves as a warning against preferring sequential MZNP for risk-reduction reasons. International marketing managers should carefully weigh the risks of delay under the two models for entering multinational markets.

2. The results obtained underscore the importance of intra-organizational factors, marketing strategy and product characteristics. At the same time, the lack of direct influence of differentiation of target markets and environmental factors means that firms are able to meet the planned timeline for new products to enter multinational markets, regardless of the nature and severity of legislation, government regulation, technological, market and competitive conditions. For this purpose, a reasonable distribution and coordination of technical and marketing resources are needed to support the RNP and MZNP processes.

3. A critical factor in timeliness is the ability to provide the marketing and technical resources required for the RNP and MZNP and their compliance with the market entry strategy. Adherence to schedule in all the simultaneous launches examined, reflecting the adequacy and synergy of resources, suggests that it would be unwise to rely on a sequential launch as a means of compensating for resource scarcity. In this regard, managers are also encouraged to evaluate decisions on RNP-MZNP resources, taking into account their impact on timeliness.

4. Adequacy of resources is only one of the conditions for a successful MZNP. It should be complemented by an effective organization of the RNP processes. The study confirmed that the timeliness and effectiveness of the RNP significantly affect the timeliness of entering multinational markets. The role of intensive communication and coordination between the headquarters and the international divisions of the firm is clearly visible, and this is best achieved through the use of “soft” integration mechanisms. Managers need to recognize the benefits of developing such “soft” systems for effective communication between units in different countries and the teams involved in the RNP. This ensures early determination of technical and marketing targets for new products, acceleration of product development processes, and rational use of resources. Intensive communications help to create the necessary infrastructure (sales force and intermediaries) in the respective countries, allow managers in these countries to gain knowledge about the product, the technologies embodied in it, its potential users and sales methods, and to complete all the necessary marketing operations, from the preparation of sales channels to advertising. In short, they facilitate the cross-country transfer of competence for the commercialization of a new product.

5. The study confirmed the link between the timeliness of the MZNP and the overall success of new products, showing that sales and margins suffer as a result of delays. In addition, although the temporal efficiency of the RNP invariably accompanies the effectiveness of the MZNP, there is sufficient reason to assert that the deterioration of financial indicators is associated more with violations of the MZNP schedule than with the RNP process. It follows that managers should consider RNP-MZNP as a single process. To avoid new product failures due to delays, the objectives, strategies and resources of the RNP and MZNP must be assessed together. In addition, high-tech firms must strive to penetrate multinational markets through timely global market entry in order to maximize the potential of new products.


Ministry of Education and Science of the Republic of Kazakhstan

International Academy of Business

FACULTY OF ECONOMICS, MANAGEMENT AND ENTREPRENEURSHIP

DEPARTMENT "FINANCE AND AUDIT"

COURSE WORK

in the discipline "Financial Management"

Empirical studies of the CAPM model

3rd-year student, full-time department, group F-0706

Chernousova Alexandra Pavlovna

Supervisor:

Candidate of Economic Sciences, Associate Professor

Aitekenova R.K.

ALMATY 2010

Plan

Introduction

Chapter 1. Concept, essence and goals of the CAPM model

1.1 Concept and essence of the CAPM model

1.2 The CAPM Calculation Process

Chapter 2. Possibility of using variants of the CAPM model

2.1 Two-factor CAPM in Black's version

2.2 The essence of the D-CAPM model

Chapter 3. Empirical studies of the applicability of the CAPM model in emerging markets

3.1 Criticism of CAPM and alternative risk measures

3.2 Overview of Empirical Research on Risk-Rewards in Emerging Markets

Conclusion

List of used literature

Appendix

Introduction

To establish the relevance of this topic, it is necessary to define what empirical research is and what its goals are.

Empirical research is scientific, fact-based research.

Any scientific research begins with the collection, systematization and generalization of facts. The concept of "fact" has the following basic meanings:

1) Some fragment of reality, objective events, results related either to objective reality ("facts of reality"), or to the sphere of consciousness and cognition ("facts of consciousness").

2) Knowledge of any event, phenomenon, the reliability of which has been proven, i.e. synonymous with truth.

3) A sentence fixing empirical knowledge, i.e. obtained in the course of observations and experiments.

The internal structure of the empirical level is formed by at least two sublevels:

a) direct observations and experiments, the result of which is the observation data;

b) cognitive procedures through which the transition from observation data to empirical dependencies and facts is carried out.

The active nature of empirical research is manifested most clearly at the level of observation in situations where observation is carried out in the course of a real experiment. Traditionally, experiment is contrasted with observation outside the experiment. Note that the core of empirical research is the experiment - a test of the studied phenomena under controlled and managed conditions. The difference between experimentation and observation is that the experimental conditions are controlled, whereas in observation the processes are left to the natural course of events. Without denying the specificity of these two types of cognitive activity, one should pay attention to their common generic characteristics.

For this, it is advisable first to consider in more detail what is the peculiarity of experimental research as a practical activity. Experimental activity is a specific form of natural interaction, and fragments of nature interacting in an experiment always appear as objects with functionally distinguished properties.

Thus, the main purpose of this course work is to examine the application of the risk-return concept and to determine its applicability in the context of changing country risks and markets.

At the moment, the risk-return concept is key in corporate finance, since it allows the investment and credit risk borne by the owners of the company's capital to be quantified in terms of return and allows effective investment and financing decisions to be built on the resulting assessment. Disputes continue to this day about the correctness of risk assessment methods and about the construction of a model, adequate to external conditions, that links the risk taken with the return required by investors.

Chapter 1. Concept, essence and goals of the CAPM model

1.1 Concept and essence of the CAPM model

The Capital Asset Pricing Model (CAPM) is a model for assessing the return on financial assets; it serves as the theoretical basis for a number of financial techniques for managing return and risk used in long-term and medium-term investment in stocks.

The model builds on the portfolio theory developed by Harry Markowitz in the 1950s and was formulated as an asset pricing model in the 1960s by William Sharpe and John Lintner.

The CAPM looks at the performance of a stock based on the behavior of the market as a whole. Another initial CAPM assumption is that investors make decisions based on only two factors: expected return and risk.

The meaning of this model is to demonstrate the close relationship between the rate of return and the risk of a financial instrument.

It is known that the greater the risk, the greater the return. Therefore, if we know the potential risk of a security, we can predict the rate of return. Conversely, if we know the return, then we can calculate the risk. All calculations of this kind in relation to profitability and risk are carried out using a long-term asset valuation model.

According to the model, the risk associated with investments in any risky financial instrument can be divided into two types: systematic and non-systematic.

Systematic risk arises from general market and economic changes that affect all investment instruments and are not unique to a particular asset.

Non-systematic risk is associated with a specific issuing company.

Systematic risk cannot be mitigated, but the impact of the market on the return of financial assets can be measured. As a measure of systematic risk, CAPM uses the β (beta) indicator, which characterizes the sensitivity of a financial asset to changes in market returns. Knowing the β of an asset, one can quantify the amount of risk associated with price changes of the market as a whole. The higher the β value of a share, the more its price rises with the general growth of the market, and vice versa: the shares of a company with a large positive β fall more strongly when the market as a whole falls.

Unsystematic risk can be mitigated by compiling a diversified portfolio from a sufficiently large number of assets or even from a small number of anti-correlated assets.
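As a rough numerical illustration of this point, the sketch below simulates returns driven by one common (systematic) factor plus idiosyncratic noise and shows the standard deviation of an equally weighted portfolio shrinking toward the purely systematic level as assets are added. The numbers are simulated placeholders, not market data.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1000                             # number of simulated periods
market = rng.normal(0.0, 0.02, T)    # common (systematic) factor

def portfolio_std(n_assets: int) -> float:
    # Each asset = market factor (scaled by its beta) + its own idiosyncratic noise
    betas = rng.uniform(0.8, 1.2, n_assets)
    idio = rng.normal(0.0, 0.04, (T, n_assets))
    returns = market[:, None] * betas + idio
    return returns.mean(axis=1).std()    # equally weighted portfolio

for n in (1, 5, 20, 100):
    print(n, round(portfolio_std(n), 4))
# The printed standard deviation falls toward the systematic level (~0.02) as n grows:
# unsystematic risk diversifies away, systematic risk does not.
```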

Because any stock has its own degree of risk, this risk must be covered with profitability in order for the instrument to remain attractive. According to the long-term asset valuation model, the rate of return of any financial instrument consists of two parts:

1. Risk-free income;

2. Premium (risk) income.

In other words, any return on a stock includes a risk-free return (often based on government bond rates) and a risk return that (ideally) matches the risk level of the security. If the return exceeds what the risk level implies, the instrument brings more profit than it should for its degree of risk. Conversely, if the risk indicators turn out to be higher than the return, such an instrument is not worth holding.

1.2 The CAPM calculation process

The relationship between risk and return according to the long-term asset valuation model is described as follows:

D = Db/r + β · (Dr - Db/r), where

D is the expected rate of return;

Db/r is the risk-free return;

Dr is the return of the market as a whole;

β is the beta coefficient.

Risk-free income is that part of the return that is embedded in all investment instruments. It is usually measured by the rates on government bonds, i.e. instruments with little or no risk. In the West the risk-free return is about 4-5%, while in our country it is 7-10%.

The total market return is the rate of return of that market's index; in Kazakhstan this is the index of the KASE stock exchange.
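A minimal sketch of this calculation follows. The input values (a 9% risk-free rate, a 15% market return and a beta of 1.2) are hypothetical figures chosen only for illustration; they do not come from the course work itself.

```python
def capm_expected_return(risk_free: float, market_return: float, beta: float) -> float:
    """D = Db/r + beta * (Dr - Db/r): the risk-free return plus the risk premium."""
    return risk_free + beta * (market_return - risk_free)

# Hypothetical inputs: 9% risk-free rate, 15% market return, beta of 1.2
print(capm_expected_return(0.09, 0.15, 1.2))  # -> 0.162, i.e. a 16.2% required return
```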

Beta is a special coefficient that measures the riskiness of an instrument. While the previous elements of the formula are simple, straightforward and easy to find, β is not so easy to obtain: free financial services do not publish β values for companies.

The regression coefficient β serves as a quantitative measure of systematic risk that does not lend itself to diversification. A security with a β-coefficient of 1 mimics the behavior of the market as a whole. If the value of the coefficient is higher than 1, the reaction of the security outstrips the change in the market both in one direction and in the other direction. The systematic risk of such a financial asset is above average. Assets with β-coefficients below 1 (but above 0) are less risky.

The concept of β-ratios forms the basis of the Capital Assets Pricing Model (CAPM). Using this indicator, the value of the risk premium required by investors for investments with systematic risk above average can be calculated.

The beta coefficient is the slope of the straight line in a linear equation of the form y = kx + b, i.e. D = β · (Dr - Db/r) + Db/r. This straight line is the regression line for two data sets: the returns of the index and of the stock. Plotting these data sets against each other produces a scatter of points, and the regression line gives us the formula and shows how strong the dependence is, given the scatter of points on the graph.

We will take the formula y = kx + b as a basis. In this formula, we will replace k with coefficient β, here it is equivalent to risk.

We get y = βx + b. For the calculations, we take approximate figures for the returns of the Corporation X share and the yield of the KASE index for the period from April 15, 2007 to April 15, 2008.

To simplify the operations, the calculations were carried out in MS Excel. The data table is presented in the Appendix.

Chart 1. Image of the beta coefficient


Thus, it can be seen from the graph that the beta coefficient is 0.503; therefore, the return of the Corporation X share grows more slowly than the return of the market on which it is quoted.

Calculating an additional indicator, the coefficient of determination R2, shows how much of the stock's price movement is explained by movements in the index. In this example, the Corporation X share depends only very weakly on the KASE index, since R2 is 0.069.
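The same calculation that the text performs in MS Excel can be sketched in Python. The return series below are random placeholders; substituting the actual weekly Corporation X and KASE returns would give the beta and R2 figures discussed above.

```python
import numpy as np

# Placeholder weekly returns; replace with the Corporation X and KASE series.
rng = np.random.default_rng(2)
market_returns = rng.normal(0.002, 0.03, 52)
stock_returns = 0.5 * market_returns + rng.normal(0.0, 0.05, 52)

# Beta is the slope of the OLS regression of stock returns on index returns:
# beta = cov(stock, market) / var(market)
beta = np.cov(stock_returns, market_returns, ddof=1)[0, 1] / np.var(market_returns, ddof=1)

# R^2 measures how much of the stock's variation the index explains.
r = np.corrcoef(stock_returns, market_returns)[0, 1]
r_squared = r ** 2

print(f"beta = {beta:.3f}, R^2 = {r_squared:.3f}")
```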

Consequently, the long-term asset pricing model (CAPM) can help in selecting stocks for an investment portfolio. The model demonstrates a direct relationship between the risk of a security and its return, which makes it possible to judge whether the return is fair relative to the risk taken, and vice versa.

In our case, the portfolio is made up of stocks with minimal risk. Investors are assumed to be risk-averse, so any security other than risk-free government bonds or treasury bills can count on investor recognition only if the level of its expected return compensates for its inherent additional risk.

This premium is called the risk premium, it directly depends on the value of the β-coefficient of the given asset, since it is intended to compensate only for systematic risk.

Non-systematic risk can be eliminated by the investor himself by diversifying his portfolio, so the market does not consider it necessary to set a reward for this type of risk.

Chapter 2. Possibility of using variants of the CAPM model

2.1 Two-factor CAPM in Black's version

As mentioned above, the classic CAPM models in the Sharpe-Lintner or Black version do not, strictly speaking, hold on the Kazakhstani market. Perhaps the failure in testing the classic versions of the CAPM model is due to the fact that the Kazakhstani market belongs to the emerging markets, for which the traditional CAPM model is not suitable, since emerging markets are by definition less efficient than developed ones and the initial assumptions of the CAPM model are not fulfilled in them. Other versions of the capital asset valuation model have been proposed in the literature; most of them are based on the CAPM model and are modifications of it.

Unfortunately, many popular models are case-by-case modifications and have no economic interpretation.

One of the most plausible and theoretically grounded models is the D-CAPM model proposed by Estrada (2002b, 2002c).

The main difference between the D-CAPM model and the standard CAPM model is the measurement of asset risk. If in the standard model risk is measured by the variance of returns, then in the D-CAPM model risk is measured by the semivariance, which shows the risk of a decrease in return relative to the expected level or any other level selected as the base.

Semivariance is a more plausible measure of risk, since investors are not afraid of the possibility of an increase in return; they are afraid of the possibility of a decrease in return below a certain level (for example, below the average level).

Based on the semivariance, an alternative behavioral model using this new risk measure can be built, as well as a modified CAPM model. The new pricing model has been called the Downside CAPM, or D-CAPM, in academic publications.

As shown in the literature, emerging market returns are better described by the D-CAPM than by the CAPM. For developed markets the difference between the two models is much smaller. In this regard, the question arises of the applicability of the D-CAPM model to the Kazakhstani stock market.

Black's model is essentially a two-factor model. The factors in this case are unobservable traded portfolios: any of the efficient market portfolios and a portfolio orthogonal to it. This suggests another method for validating the model. The idea of the method is as follows: based on the available time series of returns on various assets, factor analysis methods can be used to identify the two most significant factors and to form abstract portfolios on the basis of the factor coefficients.

If the method of principal components is used to isolate the factors, then by definition these factors, and therefore the portfolios formed from them, will be orthogonal (uncorrelated). One of the portfolios can then be considered the efficient market portfolio and the other the asset with zero beta. However, the model does not work well in emerging markets.
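A sketch of this factor-extraction step might look as follows. The returns matrix is a hypothetical placeholder driven by one common factor, and treating the first component as the "market" portfolio and the second as the zero-beta portfolio is exactly the judgment call described in the text, not a prescribed procedure.

```python
import numpy as np

# Hypothetical returns matrix: T periods x N assets driven by one common factor
# (replace with real return series).
rng = np.random.default_rng(3)
common = rng.normal(0.001, 0.02, size=(250, 1))
returns = common + rng.normal(0.0, 0.01, size=(250, 10))

# Two most significant principal components of the sample covariance matrix.
cov = np.cov(returns, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
w_market = eigvecs[:, -1]                    # loadings of the 1st (dominant) component
w_zero_beta = eigvecs[:, -2]                 # 2nd component, orthogonal to the 1st

# Returns of the two abstract portfolios (weights can be rescaled as needed);
# the first plays the role of the efficient "market" portfolio, the second
# the role of the zero-beta asset.
p_market = returns @ w_market
p_zero = returns @ w_zero_beta
print("sample correlation:", round(float(np.corrcoef(p_market, p_zero)[0, 1]), 3))
```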

When constructing a standard capital asset pricing model, it is assumed that the distribution of returns is normal. The normal distribution is symmetric and is completely determined by the mean and variance. In the standard behavioral model, investors' actions are influenced by the expectation and variance of returns (standard deviation of returns).

Evidence suggests that the distribution of returns is not symmetrical. It can be assumed that in this case, the actions of investors will be influenced not only by the expected value and dispersion of returns, but also by the coefficient of asymmetry of the distribution.

It is intuitively clear that investors, all other things being equal, prefer distributions with a positive skewness coefficient. The lottery is a good example. As a rule, in lotteries, there is a large win with a low probability and a small loss with a high probability. Many people buy lottery tickets even though their expected income is negative.

Investors, first of all, seek to preserve the initial value of their investments and to avoid its falling below a certain target level. This behavior of investors is consistent with a preference for positive asymmetry.

Hence, assets that reduce the asymmetry of a portfolio are undesirable, and the expected return on such an asset should include a premium for this risk. Asymmetry can be incorporated into a traditional pricing model; models that take asymmetry into account are discussed in the literature.

These models assume that, ceteris paribus, investors prefer assets with higher returns, assets with lower standard deviations, and assets with greater asymmetry. Accordingly, we can consider an alternative behavioral model of investors based on three indicators of the distribution of asset returns. The set of efficient portfolios is described in the space of mean, variance and asymmetry. For a given level of variance, there is an inverse relationship between profitability and asymmetry. That is, in order for an investor to hold assets with less asymmetry, they must have a higher return. That is, the premium must be negative.

As with variance, the asset's return is influenced not by the asymmetry of the asset as such, but by the asset's contribution to the asymmetry of the portfolio, that is, coasymmetry. Coasymmetry must have a negative premium. An asset with more coasymmetry should have a lower return than an asset with less coasymmetry.

The results show that skewness helps explain the variation in returns in cross-sectional data and significantly improves the explanatory power of the model. It is also shown that if markets are fully segmented, returns are affected by total variance and total asymmetry, whereas in fully integrated markets only covariance and coasymmetry matter.

Harvey and Siddique derive the following model that accounts for asymmetry:

where At and Bt are market variance, asymmetry, covariance, and coasymmetry functions. The coefficients At and Bt are similar to the coefficient β in the traditional CAPM model.

Harvey and Siddique ranked stocks by their historical coasymmetry values and formed a portfolio S-, which includes the 30% of stocks with the lowest coasymmetry in relation to the market portfolio, a middle group of 40% of stocks with intermediate coasymmetry values, and a portfolio S+, which includes the 30% of stocks with the highest coasymmetry.

For econometric verification, the following models were used in the work:

μi = λ0 + λM βi + λS βS,i + ei

μi = λ0 + λM βi + λSKS βSKS,i + ei

where μi is the average excess of the return over the risk-free rate (excess return); λ0, λM, λS and λSKS are the estimated parameters of the equations; ei are the errors; βi is the beta coefficient of the standard model; βS,i and βSKS,i are the beta coefficients of the asset in relation to portfolio S- and to the spread between the returns of portfolios S- and S+.

It is shown that the inclusion of an additional factor significantly increases the correspondence of the model to real data. Thus, it is concluded that in asset pricing models for emerging markets, it is necessary to take into account the level of integration and, possibly, the coasymmetry index.
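As an illustration of the ranking step described above, co-skewness (coasymmetry) with the market can be computed roughly as in the sketch below. It uses one common standardized definition of co-skewness rather than necessarily the exact estimator of Harvey and Siddique, and the return series are placeholders.

```python
import numpy as np

def coskewness(asset: np.ndarray, market: np.ndarray) -> float:
    """Standardized co-skewness of an asset with the market:
    E[(r_i - mu_i) * (r_m - mu_m)^2] / (sigma_i * sigma_m^2).
    (One common definition; Harvey and Siddique work with regression residuals.)"""
    a = asset - asset.mean()
    m = market - market.mean()
    return float((a * m ** 2).mean() / (asset.std() * market.std() ** 2))

# Placeholder return series for 20 stocks and the market index.
rng = np.random.default_rng(4)
market = rng.normal(0.001, 0.02, 500)
stocks = market[:, None] * rng.uniform(0.5, 1.5, 20) + rng.normal(0.0, 0.03, (500, 20))

cs = np.array([coskewness(stocks[:, i], market) for i in range(stocks.shape[1])])
order = np.argsort(cs)
s_minus = order[: int(0.3 * len(cs))]    # 30% of stocks with the lowest coasymmetry
s_plus = order[-int(0.3 * len(cs)):]     # 30% of stocks with the highest coasymmetry
print("S- portfolio:", s_minus, "S+ portfolio:", s_plus)
```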

2.2 The essence of the D-CAPM model

One of the most common ways to modify the standard pricing model is based on the use of semi-variation as a measure of asset risk. In the classical theory, following Markowitz, the dispersion of returns is taken as such a measure, which equally treats both upward and downward deviations from the expected value.

In contrast to the variance, the semivariance "punishes" only downward deviations: SV = E[min(r - B, 0)^2], where B is the base level (for example, the expected return).

The square root of the semivariance is called downside risk, the risk of downward deviation. It should be noted that this measure has its advantages and disadvantages.

Among the shortcomings, we note that the positive side of risk associated with exceeding expectations is discarded. Moreover, such a "risk" measure cannot be used as volatility and, consequently, cannot be used for pricing derivative financial instruments.

On the other hand, the use of semi-variation in the framework of portfolio theory allows us to weaken some of the assumptions of the traditional pricing model for financial assets (the assumption of a normal distribution of returns and the assumption that investor behavior is determined by the expected return and the variance of asset returns).

It is noted that, firstly, the standard deviation can be used only in the case of a symmetric distribution of returns.

Second, the standard deviation can be used directly as a measure of risk only when the distribution of returns is normal. These conditions are not supported by empirical evidence.

In addition, the use of beta coefficients derived within the traditional behavioral model as a measure of risk in emerging markets is disputed by many researchers, whereas the possibility of using the semivariance, on the contrary, is supported by empirical data.

The use of the semivariance is supported by intuitive considerations as well. Investors usually do not avoid the risk of returns above the average; they avoid the risk of returns below the average or below a certain target value. Since investing in emerging markets is very risky, a Western investor first of all avoids the risk of losing the initial value of his investment or, as noted above, avoids a reduction of this value below a certain target level. Therefore, as a measure of risk in emerging markets, it is advisable to use the semivariance and, accordingly, the standard semideviation.

Studies [Sintsov, 2003] tested a model in which risk is measured using the lower partial moment of the second order, that is, by the semivariance. On the one hand, the use of the semivariance is the most popular modification of the CAPM model; on the other hand, it allows available statistical methods to be used to test the pricing model empirically.

In this behavioral model, the measure of the interdependence of the return of a given asset and the market is the so-called semicovariance, the analogue of the covariance in the standard model:


The semicovariance is likewise unbounded and scale-dependent, but it can be normalized by dividing it by the product of the standard semideviations of the given asset and the market portfolio:

Similarly, by dividing the semicovariance by the semivariance of the market portfolio, one can find the modified beta coefficient:

The modified beta is used in an alternative pricing model, proposed by Estrada and called the D-CAPM (Downside Capital Asset Pricing Model):

Thus, the beta coefficient in the traditional CAPM model is proposed to be replaced by a modified beta coefficient, which is a measure of the risk of an asset in a new behavioral model, in which the behavior of investors is determined by the expectation and semi-variance of returns.

The modified beta can be found as the ratio of the semicovariance of the asset with the market portfolio to the semivariance of the market portfolio. Alternatively, the modified beta coefficient can be found using regression analysis.
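A sketch of the first approach follows, in the spirit of Estrada's formulation with the mean return as the base level. The return series are placeholders to be replaced with actual stock and market data.

```python
import numpy as np

def downside_beta(asset: np.ndarray, market: np.ndarray) -> float:
    """Modified (downside) beta used in the D-CAPM:
    E[min(r_i - mu_i, 0) * min(r_m - mu_m, 0)] / E[min(r_m - mu_m, 0)^2]."""
    a_down = np.minimum(asset - asset.mean(), 0.0)
    m_down = np.minimum(market - market.mean(), 0.0)
    return float((a_down * m_down).mean() / (m_down ** 2).mean())

# Placeholder series; substitute actual stock and market index returns.
rng = np.random.default_rng(5)
market = rng.normal(0.001, 0.02, 500)
stock = 0.8 * market + rng.normal(0.0, 0.03, 500)

beta_d = downside_beta(stock, market)
# The downside beta is then used in place of the ordinary beta:
# D = Db/r + beta_d * (Dr - Db/r)
print(round(beta_d, 3))
```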

The D-CAPM model takes into account one of the possible imperfections of an emerging market: the strong asymmetry of asset returns. It turned out that the modified beta of the D-CAPM model is better suited to describing average returns on the Kazakhstan securities market than the standard beta.

The D-CAPM model partially solves the problem of the underestimation of required returns in emerging markets by the standard CAPM model. Therefore, the use of the D-CAPM model in emerging markets seems preferable. There is also a theoretical basis for this, as the D-CAPM model has less stringent initial assumptions than the standard CAPM model.

However, rigorous testing shows that the D-CAPM is not consistent with emerging market returns either. Thus, none of the capital asset pricing models (the standard Sharpe-Lintner CAPM, Black's CAPM, or the D-CAPM) matches the stock market data.

Perhaps the main reason for the failure to describe the emerging market with simple models is the low liquidity of assets. Large spreads between buy and sell quotes best reflect investors' fears about the vast majority of assets. The lack of potential buyers and sellers is a serious risk for any investor with a reasonable investment horizon, and, apparently, any model suitable for such a market should take this into account.

Chapter 3. Empirical studies of the applicability of the CAPM model in emerging markets

3.1 Criticism of CAPM and alternative risk measures

A number of empirical studies in the 1970s demonstrated the advantages of CAPM in predicting stock returns; several of these works have become classics.

However, criticism of CAPM in academic circles began almost immediately after the publication of works on the model. For example, Richard Roll's work focuses on the problems associated with defining a market portfolio.

In practice, the market portfolio is replaced by a certain maximally diversified portfolio, which is not only available to the investor in the market, but also amenable to analysis (for example, a stock index). The problem with working with such a proxy portfolio is that its choice can significantly affect the calculation results (for example, the beta value).

The works of R. Levy, M. Blum and Scholes-Williams focus on the problem of the stability of the key CAPM parameter, the beta coefficient, which is traditionally estimated by linear regression on retrospective data using the ordinary least squares (OLS) method.

This is, in fact, the question of the stationarity of the economy and the possibility of constructing risk assessments based on past data. Based on the results of calculations and analysis of the dynamics of the beta coefficient for a number of individual stocks and securities portfolios, R. Levy came to the conclusion that for any stock its beta coefficient is not stable over time and therefore cannot serve as an accurate estimate of future risk. On the other hand, the beta of a portfolio of even 10 randomly selected stocks is fairly robust and therefore can be considered an acceptable measure of portfolio risk. Research by M. Blum showed that over time, the portfolio beta coefficient approaches one, and the company's internal risk approaches the industry average or market average.

An alternative model solution to the problem of stability of the CAPM parameters is the estimates obtained in the market for fixed-term contracts, when the expectations for the prices of financial assets are taken as the basis. This approach is implemented by the MCPM (Market-Derived Capital Pricing Model).

In the work of Benz and Roll, the problem of the correctness of the use of CAPM for small companies is raised, i.e. attention is focused on the problem of size (size effect, small firm effect).

Another area of criticism is the time intervals for calculating the CAPM parameters (the so-called investment horizon problem). Since in most cases the CAPM is used to analyze investments with a horizon of more than one year, calculations based on annual estimates become dependent on the state of the capital market. If the capital market is efficient (future returns are not predetermined by past dynamics, and stock prices follow a random walk), then the investment horizon is not significant and calculations based on annual indicators are justified. If the capital market cannot be considered efficient, then the investment horizon cannot be ignored.

The CAPM thesis about the significance of only systematic risk factors is also problematic. It has been empirically proven that non-systematic variables such as market capitalization or price / earnings ratio influence the required return.

Research in the 1980s and 1990s showed that CAPM beta is unable to explain industry differences in profitability, while company size and other characteristics can.

Another area of criticism concerns the behavior of investors, who often focus on pure (downside) risk rather than speculative risk. As practice shows, investors are ready to invest in assets characterized by positive volatility (i.e., returns exceeding the average level); conversely, investors perceive assets with negative volatility negatively. Two-sided variance is a function of deviations from the mean in both the upward and the downward direction. Therefore, on the basis of two-sided variance, a stock that is volatile upward is treated as just as risky as a stock that is volatile downward.

Empirical research, for example, shows that investor behavior is motivated by an aversion to one-sided negative risk as opposed to overall risk (or two-way variance).

The variance of expected returns is a controversial measure of risk for at least two reasons:

Two-sided variance is a correct measure of risk only for assets for which the expected return has a symmetric distribution

Two-sided variance can only be used directly when the symmetric distribution is normal.

Another critical area relates to assumptions about the probability distribution of security prices and returns. As practice shows, the requirements of symmetry and normality of the distribution of expected stock returns are not fulfilled simultaneously. The proposed solution is to use not the classical (two-sided) variance but the one-sided variance (the semivariance framework). This decision is justified by the following arguments:

1) the use of one-sided variance is reasonable for different distributions of stock returns: both symmetric and asymmetric.

2) one-sided variance contains information provided by two characteristics of the distribution function: variance and skew coefficient, which makes it possible to use a one-factor model to estimate the expected return on an asset (portfolio).

The problem of return asymmetry is addressed in the literature through the lower partial moment (LPM) method, which makes it possible to build an equilibrium pricing model for financial assets known as LPM-CAPM.

In 1974, Hogan and Warren showed analytically that replacing the traditional deviation of portfolio returns with a one-sided deviation for risk assessment and moving to the mean-semivariance framework does not change the fundamental structure of the CAPM.

3.2 Overview of Empirical Research on Risk-Rewards in Emerging Markets

Specific problems of the CAPM application arise in emerging capital markets, for which it is quite difficult to justify the model parameters (risk-free profitability, market risk premium, beta coefficient) according to the data of the local capital market due to the lack of information efficiency and low liquidity of traded assets.

A number of empirical studies demonstrate the incorrectness of applying the CAPM in emerging markets as compared to developed ones. A noted feature of emerging markets is the importance of specific risks associated with government policy of economic regulation, with the institutional protection of investors and with corporate governance. Because of the correlation of emerging markets with the global capital market, these risks are not eliminated by diversification of the global investor's capital.

Another problem of emerging markets is the lack of stationarity and dynamic changes associated with the liberalization of local capital markets.

Beckert and Harvey argue that when assessing the required profitability, developed and emerging markets should be considered from different positions, since the degree of integration of the local market into the global financial market should be taken into account. The degree of integration is not constant, it changes over time. This leaves an imprint on the formation of rates of return.

In a 1995 paper, Beckert argues that the presence of barriers to capital flows and international investment automatically means that risk factors in emerging markets are different from those in developed countries.

The paper argues that the level of integration into the world capital market (or the presence of barriers to capital movement) should determine the choice of a model for estimating the cost of equity capital.

An alternative point of view is presented in the work of Rouwenhorst. The author concludes that, in terms of the factors at work, there is no difference between developed and emerging markets: the factors explaining the return on equity that have proven significant in developed markets are significant in emerging markets as well. These factors include:

· The size of the company;

· Variables reflecting the degree of operational and financial risk;

· Liquidity of shares;

· Growth prospects.

Active research on testing CAPM modifications that take the specifics of underdeveloped capital markets into account has been carried out in South America (Argentina, Brazil, Venezuela). The choice of modification is recommended to be linked to the degree of development of the local financial market and its integration into the global capital market.

Scheme 1. CAPM modifications depending on the degree of integration and market segmentation.

The Godfrey-Espinosa model calculates the beta coefficient and the market risk premium from local market data and adjusts the global risk-free rate of return by a country risk premium (CRP). To avoid double-counting risk, it introduces an adjustment factor (1 - R²), where R² is the coefficient of determination of the regression linking the company's returns in the local market to the variability of the country risk premium.
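The sketch below is one possible reading of the verbal description above, not a definitive statement of the Godfrey-Espinosa model; all input values in the example are hypothetical.

```python
def cost_of_equity_with_country_risk(rf_global, crp, beta_local, mrp_local, r_squared):
    """Cost of equity with a country risk premium added to the global risk-free rate
    and a (1 - R^2) factor applied to avoid double-counting country risk."""
    return (rf_global + crp) + beta_local * mrp_local * (1.0 - r_squared)

# Hypothetical inputs: 4% global risk-free rate, 3% CRP, local beta of 1.1,
# 6% local market risk premium, R^2 of 0.35 from the country-risk regression
print(cost_of_equity_with_country_risk(0.04, 0.03, 1.1, 0.06, 0.35))  # about 0.113, i.e. 11.3%
```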

Gonzalez's work tests the CAPM on a sample of companies listed on the Caracas (Venezuela) stock exchange. Using regression analysis on data for a six-year period (1992-1998), the author concludes that the CAPM does not work in the Venezuelan market.

This conclusion rests mainly on the rejection of the hypothesis of a positive relationship between risk and stock returns. However, the results of Gonzalez's study also showed, first, that the relationship between risk (measured by the beta coefficient) and return is linear and, second, that systematic risk is not the only factor influencing the expected return on equity.

Similar results were obtained in M. Omran's research on the Egyptian capital market. The sample included 41 companies with the most liquid shares. The data panel was compiled for the period December 2001 to December 2002 on the basis of logarithmic stock returns calculated from weekly observations.

Empirical tests by Omran indicate that market risk is a significant factor in explaining the expected returns on Egyptian stocks. The paradox revealed by the study is that the return on a portfolio composed of stocks with low beta coefficients (mainly companies that produce consumer goods and provide financial services) is higher than the return on a portfolio of stocks of construction, textile and hospitality companies with higher beta coefficients. According to the author, the reason for this discrepancy is the state nationalization of the 1950s-1960s, which had a greater negative impact on the risks of the industrial and construction sectors than on producers of consumer goods and financial institutions.

Research on the choice of investment risk measures in emerging markets is also of interest. As a rule, such studies test several models: the CAPM and its alternatives. For example, Hwang and Pedersen test three models: the classical CAPM and two models that use asymmetric risk measures - LPM-CAPM (Lower Partial Moment CAPM) and ARM (Asymmetric Response Model).

The peculiarity of the alternative models is that, according to the authors, they are suitable for cases of non-normally distributed returns and an illiquid local capital market. The study was conducted on a sample of 690 companies in emerging markets over a ten-year time frame (April 1992 to March 2002). Based on the results of this work, Hwang and Pedersen concluded that the CAPM is not inferior to the alternative models in terms of explanatory power. On the cross-sectional sample, the explanatory power of the CAPM reached 80% on the panel of weekly and monthly returns, and 55% on daily returns. No significant benefits of asymmetric risk measures were identified. In addition, the authors divided the sample of 26 developing countries by region and then split the observation period into two intervals - before and after the 1997 Asian crisis.

Due to this, Hwang S. and Pedersen C. identified a significant impact of local risks in emerging capital markets, which is consistent with the results of the work presented above.

Daryl Collins' study tests different risk measures for 42 emerging markets: systematic risk (beta), total risk (standard deviation), idiosyncratic risk, one-sided measures (one-sided deviation, one-sided beta and VaR), market size (based on a country's average capitalization), and indicators of skewness and kurtosis.

The testing was carried out using an econometric approach (as in most similar works) from the position of an international investor over a five-year interval (January 1996 to June 2001) based on weekly returns. Depending on the size of the capital market, its liquidity and its degree of development, the initial sample of 42 countries was divided into three groups: the first tier - countries with a large capital market (for example, Brazil, South Africa, China), as well as countries with small but economically and informationally developed markets; the second tier - smaller emerging markets (for example, Russia); the third tier - small markets (such as Latvia, Estonia, Kenya, Lithuania, Slovakia, etc.).

According to the results of the study, for some markets the estimated beta coefficients turned out to be lower than expected, giving a false signal of low risk to investors. The conclusion of the work is that the beta coefficient (and, consequently, the CAPM) cannot be applied correctly across the entire set of developing countries. Collins argues that there is no single risk indicator that would suit every developing country.

For first-tier countries, the most appropriate risk indicator is the measure that takes market size into account; for the second tier, one-sided risk measures (among which VaR showed the best results); for the third tier, either the standard deviation or idiosyncratic risk. Idiosyncratic risk is the part of an asset's risk that does not depend on the overall level of financial risk in a given economy; it is also referred to as unsystematic risk, as opposed to systematic risk.
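As a small illustration of the one-sided measures mentioned above (using hypothetical weekly returns, not data from the cited study), the sketch below computes a historical VaR and a downside deviation.

```python
import numpy as np

def historical_var(returns, confidence=0.95):
    """Historical value-at-risk: the loss not exceeded with the given confidence,
    returned as a positive number."""
    r = np.sort(np.asarray(returns, dtype=float))
    idx = int((1.0 - confidence) * len(r))
    return -r[idx]

def downside_deviation(returns, target=0.0):
    """One-sided deviation: root mean squared shortfall below the target return."""
    shortfall = np.minimum(np.asarray(returns, dtype=float) - target, 0.0)
    return np.sqrt(np.mean(shortfall ** 2))

weekly = np.array([0.012, -0.041, 0.008, -0.017, 0.025, -0.003, 0.014, -0.029, 0.006, 0.019])
print("95% weekly VaR:", historical_var(weekly))
print("downside deviation:", downside_deviation(weekly))
```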

A similar conclusion about the suitability of different measures of systematic one-sided risk for countries with differing stock market characteristics is reached in another work, which analyzes the applicability of a number of one-sided risk measures (BL-beta, HB-beta, E-beta) for 27 emerging markets (the sample included Asian, Latin American, African and Eastern European markets, including Russia) over the period 1995-2004. The MSCI Emerging Markets Index serves as the global portfolio, and ten-year US government bonds (T-bonds) serve as the risk-free rate. It is shown that for markets with strongly asymmetric return distributions (high skewness), the most suitable measure of systematic risk is HB-beta, while for markets with significant observed excess returns, BL-beta has an advantage over the other risk measures.

An empirical study of the benefits of DCAPM has been carried out for countries of Central and Eastern Europe with similar geographic and macroeconomic characteristics. It analyzes the factors that form the returns of companies from eight countries of the former socialist bloc - the Czech Republic, Slovakia, Hungary, Poland, Slovenia, Estonia, Latvia and Lithuania - over the period 1998-2003. The authors show the importance of one-sided risk measures alongside the continued influence of specific risk factors.

Campbell Harvey investigated the impact of market segmentation on the level of returns required by investors. The paper argues that the cost of capital in segmented markets will be higher than in integrated markets, since investors demand greater compensation for bearing local, idiosyncratic risk. This implies that any increase in the degree of financial integration should lead to a decrease in the cost of equity.

Rene Stulz proposed diagnostic parameters that allow the country risk premium (CRP) to be included in the global investor's risk-return model.

It is necessary to take into account the degree of integration (the presence of barriers in the movement of capital) and the covariance of returns on the local and global markets. The description of formal and informal barriers to capital movement observed in segmented markets is given in the work.

A number of studies examine in detail the impact of capital market liberalization on the cost of equity capital. For example, based on the dividend yield model (Gordon's model), the authors show that the liberalization of segmented capital markets reduces the cost of equity by an average of about 50%. A similar study, based on an analysis of changes in dividend yields and growth rates for 20 emerging markets (including countries of South America, Asia and Africa), takes as an external marker of liberalization the date from which foreign investors are allowed to buy shares of companies on the local market. That paper also shows a decrease in the cost of capital as a result of liberalization by an average of almost 50%.
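For reference, a minimal sketch of the implied cost of equity under the dividend yield (Gordon) model referred to in these studies; the figures are hypothetical.

```python
def gordon_implied_cost_of_equity(next_dividend, price, growth_rate):
    """Implied cost of equity from the Gordon model: k_e = D1 / P0 + g."""
    return next_dividend / price + growth_rate

# Hypothetical inputs: expected dividend of 2.0, share price of 40.0, long-run growth of 3%
print(gordon_implied_cost_of_equity(2.0, 40.0, 0.03))  # 0.08, i.e. an 8% implied cost of equity
```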

An event study estimating the cumulative excess return from the price dynamics of depositary receipts (ADRs) of 126 companies from 32 local markets showed, for the period 1985-1994, a reduction in the cost of equity capital of 42%.

The work of Daryl Collins and Mark Abrahamson analyzes the cost of equity capital using the CAPM model in 8 capital markets of the African continent (Egypt, Kenya, Morocco, etc.) from the perspective of a global investor. The study was carried out with the identification of 10 main sectors of the economy. Two time periods have been identified that characterize the different degrees of openness of the economies (1995-1999 and 1999-2002).

The authors show a decline over time in the risk premium in African capital markets. The largest changes took place in Zimbabwe and Namibia, the smallest in Egypt, Morocco and Kenya. The average cost of equity for 2002 is about 12% in US dollar terms. The sectors with the largest weight in the economy show the lowest cost of capital.

Conclusion

The Capital Asset Pricing Model (CAPM) can help in selecting stocks for an investment portfolio. The model describes a direct relationship between a security's risk and its expected return, which makes it possible to judge whether a return is fair relative to the risk borne, and vice versa. Used together with other stock-selection strategies and methods, it improves the chances of building a sound, profitable portfolio.

The CAPM itself is an elegant scientific theory with a solid mathematical foundation. For it to "work", obviously unrealistic conditions must hold, such as a perfectly efficient market, the absence of transaction costs and taxes, and equal access of all investors to credit resources. Nevertheless, this abstract logical construction has received almost universal recognition in the world of real finance.

Using the CAPM gives the finance manager a tool for predicting the cost of raising new capital for investment projects. The finances of any enterprise are an open system, so when planning its capital investments the enterprise must take the state of the financial market into account. Company managers may know nothing at all about the individual characteristics and personal preferences of potential investors, but this does not relieve them of the obligation to anticipate the main need of any investor: to receive a return that compensates for the investment risk. A model for pricing financial assets can help them in this.
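For reference, a minimal sketch of the standard CAPM relation the text relies on: the expected return equals the risk-free rate plus beta times the market risk premium. The inputs below are hypothetical.

```python
def capm_expected_return(risk_free_rate, beta, market_risk_premium):
    """Standard CAPM: E(R_i) = R_f + beta_i * (E(R_m) - R_f)."""
    return risk_free_rate + beta * market_risk_premium

# Hypothetical inputs: 5% risk-free rate, beta of 1.2, 6% market risk premium
print(capm_expected_return(0.05, 1.2, 0.06))  # 0.122, i.e. a 12.2% required return
```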

Testing Black's CAPM as a two-factor model has shown that the model is not applicable to emerging markets. However, this testing made it possible to identify the two portfolios that are not directly observable in Black's version - the market portfolio and the zero-beta portfolio.

It turned out that the first of them consists mainly of corporate securities, the second - of government securities and currencies, which seems quite reasonable and gives some hope for success in the next, more thorough tests of the model.

Perhaps the main reason for the failure to describe the emerging market in simple model terms is the low liquidity of assets. Large spreads in buy and sell quotes are the best reflection of investors' fears about the vast majority of assets. The absence of potential buyers and sellers is a serious risk for any investor with a reasonable investment horizon and, apparently, any model suitable for an emerging market should take this into account.

One possible imperfection of an emerging market - strong asymmetry in asset returns - is taken into account by the D-CAPM model. It turned out that the modified beta of the D-CAPM describes average returns in an emerging securities market better than the standard beta. The D-CAPM partially solves the problem of required returns being underestimated in emerging markets when the standard CAPM is used, so applying the D-CAPM in an emerging market appears preferable. There is also a theoretical basis for this, since the D-CAPM rests on less stringent initial assumptions than the standard CAPM. However, rigorous testing shows that the D-CAPM is not consistent with emerging market returns either. Thus, none of the considered capital asset pricing models - the standard CAPM in the Sharpe-Lintner version, the CAPM in Black's version, or the D-CAPM - fits the data of the Kazakhstan securities market.

Thus, variants of the CAPM cannot be applied in the Kazakh capital market until a sound organizational structure of the stock market, with full-fledged participants and issuers, emerges. The model's key coefficient, beta, is built from indicators on securities that at present cannot be adequately calculated.

List of used literature

1. http://berg.com.ua/fundam/capm/

2. http://books.efaculty.kiev.ua/fnmen/3/g5/6.htm

3. Bukhvalov A.V., Okulov V.L. Classical capital asset pricing models and the Russian financial market. Part 1: Empirical validation of the CAPM model. Scientific reports No. ... - 2006. St. Petersburg: Research Institute of Management, SPbSU, 2006.

4. Bukhvalov A.V., Okulov V.L. Classical capital asset pricing models and the Russian financial market. Part 2: Possibility of application of CAPM model variants. No. 36(R) - 2006.

5. Teplova T.V., Selivanova N.V. An empirical study of the applicability of the DCAPM model in emerging markets. Corporate Finance, No. 3, 2007.

6. Aizin K.I., Livshits V.N. Risk and profitability of securities on stock markets of stationary and non-stationary economies. Audit and Financial Analysis, No. 4, 2006.

Appendix

Beta coefficient calculation

Date          KASE index   Corporation X   KASE return, %   Corporation X return, %
17.03.2008    923.23       122.75
18.03.2008    939.00       118.60           1.71             -3.38
19.03.2008    960.73       122.50           2.31              3.29
20.03.2008    978.96       121.00           1.90             -1.22
21.03.2008    957.00       123.00          -2.24              1.65
24.03.2008    949.14       123.50          -0.82              0.41
25.03.2008    947.40       122.75          -0.18             -0.61
26.03.2008    938.97       122.50          -0.89             -0.20
27.03.2008    959.31       125.00           2.17              2.04
31.03.2008    981.86       128.50           2.35              2.80
01.04.2008    995.57       130.00           1.40              1.17
03.04.2008   1009.33       130.00           1.38              0.00
04.04.2008   1003.17       124.65          -0.61             -4.12
07.04.2008   1004.37       125.00           0.12              0.28
08.04.2008   1006.26       125.50           0.19              0.40
11.04.2008   1030.04       125.25           2.36             -0.20
14.04.2008   1024.08       121.55          -0.58             -2.95
15.04.2008   1035.79       130.00           1.14              6.95
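A minimal sketch of how the beta coefficient can be computed from the series tabulated above, using the standard covariance-over-variance formula (the text does not spell out the exact calculation procedure used, so this is an assumption):

```python
import numpy as np

# Index (KASE) and Corporation X closing values from the appendix table
kase = np.array([923.23, 939.00, 960.73, 978.96, 957.00, 949.14, 947.40, 938.97,
                 959.31, 981.86, 995.57, 1009.33, 1003.17, 1004.37, 1006.26,
                 1030.04, 1024.08, 1035.79])
corp_x = np.array([122.75, 118.60, 122.50, 121.00, 123.00, 123.50, 122.75, 122.50,
                   125.00, 128.50, 130.00, 130.00, 124.65, 125.00, 125.50,
                   125.25, 121.55, 130.00])

# Simple daily returns, as in the table's percentage columns
r_m = kase[1:] / kase[:-1] - 1.0
r_x = corp_x[1:] / corp_x[:-1] - 1.0

# Beta as the covariance of the stock with the index divided by the index variance
beta = np.cov(r_x, r_m, ddof=1)[0, 1] / np.var(r_m, ddof=1)
print("beta:", round(beta, 3))
```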

Introduction

Experiential marketing is everywhere. It is present in a wide variety of markets (consumer, industrial, services, technology) and manifests itself in a wide variety of industries. Many organizations turn to the power of experiential marketing when developing new products, establishing communications with consumers, improving sales relationships, selecting business partners, designing retail space and creating websites. And this trend continues to grow: marketers are increasingly moving away from the traditional marketing of features and benefits toward creating experiences for their customers.

The purpose of this course work is to examine the theoretical foundations of empirical marketing. To achieve this goal, the following research tasks are addressed: to consider the essence of empirical research and its stages, as well as social research; to study empirical experiences; and to consider the scope and range of application of empirical marketing.

Empirical Marketing Research

The essence of empirical marketing research

Experiential marketing is about creating connections between a brand and consumers by providing the latter with emotionally and intellectually engaging experiences. This definition reflects the concept's alternative name, derived from the English word "experience".

Speaking about the essence of marketing research, one can confidently classify it as scientific research. Indeed, that is essentially what it is: it is aimed at solving clearly defined tasks; it is intended to provide practice with reliable and valid information; it uses scientific methods of collecting and processing information; and so on. And yet there is one circumstance that requires clarification in this regard.

How, for example, does marketing research differ from sociological research, which is based on clear scientific principles? Let us point out the following circumstances that matter in this regard. Marketing research (hereinafter MI) is not only a process of collecting and analyzing information but also a specific, additional channel of communication with the market for the relevant goods, with potential partners and with consumers. Often, on behalf of the customer, a distribution network can be formed already in the course of conducting the marketing research (sometimes even drawing on the teams of interviewers carrying out the research).

Two main types of MI should be distinguished, according to their general orientation and the very goals of the research activity.

Theoretical and applied research ("purely" theoretical marketing research probably cannot exist, since the main economic laws at work in marketing are studied in microeconomics and a number of similar disciplines) aims to identify marketing problems and to search for mechanisms to solve them by developing new approaches to their study and interpretation. Examples include new approaches to classifying consumers, qualitatively new ways of studying markets, new concepts for product promotion and advertising, and so on.

Empirical marketing research is scientific research aimed at obtaining fact-establishing knowledge, that is, at establishing and generalizing social facts through the direct or indirect registration of events characteristic of the social phenomena, objects and processes under study.

Consider the empirical methods of marketing research:

1. Observation.

This method assumes that the researcher is in close proximity to the object of interest, for example a moral and legal conflict, and is able to see and record all phases of its sociodynamics.

Observation can be participant observation, when the researcher takes on the role of one of the participants in the studied fragment of the legal relationship. This provides additional opportunities for delving into the essence of the legal conflict and into the motivational spheres of its participants.

With ordinary, non-participant observation, the information gathered may be less deep. Its advantage, however, is that by remaining an outside observer the sociologist has a better chance of maintaining a position of impartiality and objectivity and of avoiding evaluative elements in ascertaining judgments.

A distinction must be made between covert observation, when the participants in the legal relationship under study are unaware of the sociologist's interest in them, and overt observation, when the participants know that research is being carried out.

Observations can also be graded as extensive, when the field of view contains a fairly broad subject occupying a significant place in social space, and intensive, when the subject of research attention is small in scope and fits entirely within the field of vision of a single sociologist. In the second case, observation is extremely focused and more effective.

The observation method is convenient for studying objects that are local, fragmented, small in scale and in number of participants, and easily accessible.

2. Analysis of documents.

When a certain legal reality is out of reach for direct empirical study (for example, it has disappeared from the present and remains in the historical past), but single texts or complexes of written documents have survived from it, these can serve the sociologist as a source of information. Documents as artifacts, that is, artificial, secondary facts, can testify to the actual, once existing, primary facts of the legal life of society and the individual. Legislative acts, codes, government decrees, protocols of investigative actions and court proceedings, written testimonies of participants in various legal procedures, as well as publicistic and artistic works covering legal problems - all of this can provide sociologists with the necessary information. In analyzing them, the sociologist to some extent becomes a lawyer, and the lawyer a sociologist. One and the same event appears to the former as a typical social fact and to the latter as a characteristic legal phenomenon or incident. Taken together, these two views, the sociological and the legal, provide a three-dimensional image of the socio-legal reality under investigation and allow researchers to capture properties and boundaries that each, acting separately, might have passed by without noticing. The value of the sociology of law as a theoretical discipline lies precisely in the fact that its representative develops in himself the strengths of both a sociologist and a lawyer.

If the documents are not of a purely legal nature but are of interest to the sociologist for some reason, he faces the difficult task of extracting specifically legal information from their contexts. One of the means of solving this problem is content analysis. It is used when voluminous textual material is available, in order to count the occurrences of certain content-semantic units in it. For example, an analysis of all 1937 issues of the central newspaper Pravda, counting the total number of reported executions of "enemies of the people", can provide quite eloquent information about the state of the Soviet justice system and about its degree of civilization, humanity and justice. Legal sociologists often cite the example of content analysis related to the activities of the International Institute for Human Rights in Strasbourg. In 1971, its staff attempted to identify the words most frequently found in official legal documents at the state and international level. The first three places were taken by the words law, equality and freedom, respectively. In other words, the priority values of the international community's political and legal activity were identified, values that serve as guidelines for the practical efforts of states and peoples [4].

Folklore - ancient myths, folk tales and legends - as well as various ethnographic materials can be of particular interest for collecting information on the state of various archaic and modern forms of customary law.

This method involves a scrupulous, methodical reading of texts according to a pre-compiled, multi-stage program. In such cases the required information may be collected literally bit by bit and over quite a long time.

Sociologists of law endowed with artistic flair can successfully work with the literary works of prominent writers as sources of social and legal information. Thus, French sociologists are inclined to believe that solid material on the sociology of property is found in the novels of Balzac's The Human Comedy, and on the sociology of the family in Zola's multivolume epic Les Rougon-Macquart. Similarly, for Russian (and not only Russian) sociologists, the work of F.M. Dostoevsky is an invaluable source of information on the sociology of law and crime.

Document analysis is important when sociologists deal with structures of the legal system whose activities are strictly documented. If we take into account that part of this documentation is closed and that the information it contains is intended for a narrow circle of professionals, it must be recognized that law enforcement agencies need their own sociological staff, who could, through their analytical studies, provide substantial assistance in the self-improvement of the legal system.

3. Survey (interviewing, questionnaires, testing).

Among the sociological methods of collecting primary information, the survey occupies an important place. It is used when observation is impossible or impractical, and when it is necessary to gauge public opinion on significant events in the social and legal life of the state and civil society. Surveys matter on the eve of such events in the political and legal life of society as referendums. Covering a small number of citizens, they serve as test measurements of the state of public opinion and a kind of rehearsal for the upcoming vote.

An interview takes the form of a personal conversation between the sociologist and a person of interest. Such a conversation most often consists of questions and answers and can take place either in person or by telephone. The answers received are recorded, processed, summarized and correlated with the results of other similar interviews.

The interview can be recorded on a tape recorder, and the recording itself can be used in different ways, depending on the interviewer's research aims. For example, the American writer Truman Capote's In Cold Blood (1966) was created on the basis of tape recordings of his conversations with two young convicted criminals; a Russian translation was later published in Moscow under the title Ordinary Murder.

Questioning differs from interviewing in that it can be not only individual but also group-based. In addition, it presupposes that the sociologist has a pre-compiled questionnaire. Its advantage is that it allows a large number of people to be surveyed simultaneously. Another clear advantage is that a questionnaire can be not only signed but also anonymous. For sociologists, the second option is often preferable to the first, since it allows respondents to give sincere answers to questions that are commonly called "delicate".

Testing is a more complex questionnaire technique. Experts compose a special kind of questionnaire (a test) containing a significant number of dissimilar questions. The purpose of the test is to get the respondent to "speak out" or "let something slip", that is, to answer questions he would not answer in an ordinary interview or questionnaire. At the same time, tests make it possible to reveal unconscious attitudes of individuals, hidden even from their own understanding.

This technique is important in studying the motivational sphere of individual legal consciousness. It holds great promise in criminological research.

4. Sociological experiment.

To confirm their hypothesis, to refute the assumptions that contradict it, sociologists can simulate the socio-legal situation they need. A model of this kind can be either completely real, that is, situational-empirical, or mental, imaginary.

Legal relationships are a sphere that individuals experience very painfully and to all manifestations of which they react extremely sharply. It is always very difficult to carry out real experiments on its "territory". As for thought experiments in the field of law, here culture has come to the aid of sociologists. For a long time there have existed talented dramas, novels and short stories, created by brilliant artists of the word, that explore the most diverse aspects of legal relations and the most complex structures of individual legal and criminal consciousness. Created by the play of creative imagination, they are nothing other than thought experiments. Sociologists, of course, should not ignore classical and contemporary works of fiction with a legal and criminological focus. In doing so, they will essentially be applying the method of sociological document analysis to thought experiments they did not set up themselves, with literary texts playing the role of such experiments.

5. Biographical method.

It can be attributed to the analysis of documents, but it can also be considered an independent method. It is a way of studying biographical data in order to collect the necessary information of a psychological, sociological, moral and legal nature. The biographical method allows one to formulate hypotheses and find evidence regarding the peculiarities of the attitude of a particular person or a certain category of individuals to certain socio-legal phenomena and processes, as well as make assumptions about the nature of their legal consciousness and about the typical features of their socio-legal behavior.

The active use of the biographical method in modern socio-legal theory began in the first decades of the 20th century and is connected with the publication of the works of W. Healy, The Individual Delinquent (1915) and Mental Conflicts and Misconduct (1917), and S. Freud, Dostoevsky and Parricide (1928). Many Western researchers, including F. Znaniecki, C. Cooley, H. D. Lasswell, G. H. Mead and W. I. Thomas, turned to the study of personal documents, letters and diaries in order to obtain reliable information about the motives of social behavior of the people who interested them. By analyzing family relations, heredity and the continuity of generations, human actions in critical life situations and relationships with others, they revealed not only conscious but also unconscious inclinations affecting both law-abiding and illegal behavior of individuals.

Empirical data of a biographical nature, together with the general logic of inductive-deductive constructions, make it possible to reconstruct the most complex motivational collisions of the inner life of individuals who find themselves in extraordinary conditions of suicide, crime, imprisonment, etc.

Conclusion: the methods described above can be used in different proportions, applied to different socio-legal material, and combined in each individual case into a particular model of research activity. Let us identify the most significant of these models:

1. Pilot study.

Its essence lies in the fact that it has an exploratory character and allows researchers to test their tools on a small area of the problem field of interest to them. This is a kind of micro-model of the future full-scale research. Its task is to identify the weaknesses of the conceived program, make the necessary adjustments in advance, clarify the initial premises of the hypothesis, more accurately outline the boundaries of the subject under study, and more clearly identify the problem and the tasks arising from it.

2. Descriptive research.

This type of research includes a comprehensive, as complete as possible, description of the legal phenomenon. Its features, structural and content properties, and functional capabilities are revealed. At the same time, researchers are in no hurry with final estimates, generalizations and conclusions. Their task is to create the necessary empirical prerequisites for all this.

3. Analytical research.

This is the most complex and in-depth version of scientific research, not limited to sliding along the phenomenal surface of socio-legal realities. The task here is to move from phenomena into the depths of the problem, to the essential parameters of the recorded socio-legal phenomena and facts, to the reasons and grounds for their occurrence and to the conditions of functioning.

The results of analytical studies are of the greatest scientific value and practical significance. Based on them, the customers for whom this work was carried out take certain practical steps to correct, reorganize, and improve specific areas of social and legal reality.

Consider empirical social research in France in the second half of the 19th and the beginning of the 20th century.

Speaking about the prehistory of empirical sociology in the 19th century, the special role of the French scientist and politician Frederic Le Play (1806-1882) should be noted. Empirical sociology is a school focused on studying specific facts of social life with the help of special methods. Le Play attached particular importance to the collection of specific social facts, on the basis of which alone he considered it possible to draw well-grounded conclusions. His main work, European Workers, was published in 1855. It contains the results of a study of working families and their budgets, which for the French researcher served as the main indicator of standard of living and way of life. Le Play investigated the various sources of workers' income with the help of statistical methods and, on the basis of the results obtained, proposed his own program of social reform of their situation. His approach has not lost its significance for the preparation and conduct of empirical social research on various occupational groups of the population.

Le Play's main area of scientific interest is the study of the family. He was one of the first to investigate it using empirical methods. Chief among them were the statistical ones. However, Le Play resorted to using others, including direct observation of individual special cases and their subsequent description. The French researcher considered the family as the main unit of the social system of society, in which it fully reproduces itself.

Le Play developed a methodology and methodology for the theoretical and empirical study of the family. Within the framework of the first - theoretical - direction, he formulated a typology of the family, including its three main varieties: patriarchal, unstable, intermediate. The first type is a traditional three-generation family with common property and domination of the father (patriarch), characteristic of the village and the peasantry. The unstable type is a two-generation nuclear isolated family, most common in the upper strata of society and among the industrial working class. The intermediate type is a kind of patriarchal family in which one of the heirs owns the household, and the rest receive their money share, which creates the opportunity for them to work elsewhere. At the same time, ties with the parental home are not cut off, it continues to remain a "fortress" in which you can always hide from the difficulties and upheavals of life.

In his theoretical analysis of the family, Le Play drew attention to it as the main factor of social control. He believed that the state can carry out its internal policy and successfully solve the main tasks of management, only relying on the family and creating the necessary conditions for its functioning and stable development.

As for the method of empirical research of the family, it found its fullest expression in the monographic method that Le Play developed and of which he is one of the founders. In the book European Workers he described the way of life of 57 families engaged in various fields of activity - agricultural, handicraft, artisanal and industrial - living in a number of regions of several European countries differing in their level of economic development. The main focus of the description was the family budget and the structure of income and expenditure, i.e. what could be clearly quantified.

The monographic method of research was opposed to the historical method being developed at the time. Its essence consisted in combining the theoretical analysis of the family with specific data and materials about its life, i.e. with what we would now call field data. At the same time, the emphasis was placed on the factors of the family's social environment, which had a profound effect on it: place of residence, the character of family members' labor activity, income levels, and so on. Le Play's student Henri de Tourville did especially much to apply the monographic method to the study of the family.

In general, Le Play was an ardent supporter of preserving and strengthening the family in its traditional forms. To this end, he proposed adopting a series of laws against the fragmentation of family property and stimulating the development of family production. The scientist attached particular importance to the preservation of family property, which acts not only as a means of production but also as a strong moral factor and an instrument of family continuity. Since he believed that the family is the only structural unit of society able to protect the worker from market fluctuations and the individual from social storms and hardships, the reforms of society he called for were, in his opinion, to concern families first of all.

Considering the development of empirical social research in France in the 19th and early 20th centuries, one cannot ignore the scientific work of G. Tarde in this area. It is associated, first of all, with the study of the social aspects of crime, the clarification of its causes as a social phenomenon and the analysis of the effectiveness of the French penitentiary system. Such attention and interest on Tarde's part should not be surprising: he worked for 25 years in the justice system, mainly in the courts, and studied the problems of criminology closely. The result of his theoretical and empirical research was a "portrait" - both physical and social - of the criminal offender. It contained a number of traits and features (both physical and social) of the delinquent (criminal) personality type, derived from the analysis of a large number of criminal cases.

Based on the generalization of extensive statistical material on crimes committed, Tarde also examined the prevalence and social characteristics of certain types of crime, not only in France but in the world as a whole. In particular, he concluded that the number of violent crimes is higher in countries with a warm climate and lower in those with a cold climate.

The works of Tarde and Le Play were important for the subsequent development of not only empirical sociology, but also criminology, whose representatives still highly appreciate the ideas and works of French scientists.


Three decades ago, empirical research on industry market theory resembled work in other applied areas of economics. Around the same time that industrial organization economists were using multi-industry regression analysis and grappling with problems of endogeneity, omitted variables and reverse causality, others, particularly in labor economics, were constructing wage regression equations and facing similar problems. Since then, empirical research in industry market theory has advanced significantly, and in a particular direction: toward the analysis of individual industries, which allows researchers to achieve cleaner measurement and identification, and toward studies in which the empirical analysis is framed in the language of one of the theories describing the relevant industry, or of a set of competing theories.
Today, the development of applied microeconomics is described as a movement toward the use of randomized experiments (and quasi-experiments) to establish cause-and-effect relationships. Empirical research on industry markets, however, has drawn criticism. One question is what constitutes an acceptable source of identifying variation in the data. The ideal here is not controversial: every researcher would like first to identify the object of interest and then to design the ideal experiment for measuring it. In the absence of such an opportunity, however, researchers have to compromise, which usually means choosing between the accuracy of the measurement and how well the measured object substitutes for the object that originally interested the researcher.
Imagine you want to measure consumer demand for cornflakes. Ideally, one would like to know the exact (n x n) matrix of cross-price elasticities of demand. A common approach in industry market research is to account for possible demand shifts and rely on the assumption that the remaining price correlations across cities are driven by changes in costs. An alternative approach is to look at specific cases where prices changed for understandable but plausibly external reasons, such as when the Froot Loops pricing algorithm failed and the company's product was sold at reduced prices for a week. Such cases are used to estimate the price elasticity of demand for Froot Loops corn flakes, and the researcher can then extrapolate the findings, suggesting that a price change for a similar variety of cereal, or perhaps any other type of cereal, would produce similar results. In this context, neither approach is ideal. The first relies on a dubious identification assumption but designs the analysis to investigate the object of interest directly. The second yields a more accurate measurement of a few matrix elements, but dubious extrapolation is required to assess the object of direct interest to the researcher.
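As a simple illustration of the trade-off described above, the sketch below estimates an own-price elasticity from a log-log regression of quantity on price using hypothetical weekly data; the elasticity is identified only under the (often dubious) assumption that the price variation is exogenous to demand shocks.

```python
import numpy as np

# Hypothetical weekly observations: price per box and boxes sold
price = np.array([3.99, 3.79, 3.99, 2.99, 3.49, 3.99, 3.19, 3.89, 2.79, 3.59])
quantity = np.array([1010, 1120, 980, 1630, 1260, 1000, 1450, 1030, 1780, 1190])

# Log-log regression: log(q) = a + b * log(p); the slope b is the own-price elasticity,
# identified only if the price variation is exogenous
b, a = np.polyfit(np.log(price), np.log(quantity), 1)
print("estimated own-price elasticity:", round(b, 2))
```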
One problem with economists trying to assess a line of research in the abstract is that the result is a highly simplified picture. The decision about the trade-off just described almost always depends on the question the researcher wants to answer, on the data available for answering it, and on the extent to which economic theory permits assumptions about the relationships between the parameters in the data. In our view, in the theory of industrial markets different approaches are possible even when answering the same question. Indeed, since any study can only bring researchers closer to the desired answer to some degree, approaches from different points of view often complement each other.
A second question about the evolution of industry market theory concerns the role of economics in empirical research. It is safe to say that professional economists (at least the majority) regard economic theory as a lens through which it is convenient to look at the world. It is therefore surprising that the view has emerged that economic theory does not play a significant role in interpreting and analyzing the data obtained from empirical research, and that measurement strategies should instead be used without the constraints imposed by theoretical models. Transparency may be one explanation: the proponents of this view seem to equate the use of economics with complex modeling that obscures the data. But this identification is false in the sense that one can have a perfectly clear analysis of a model grounded in economic theory and an utterly confusing linear-regression analysis.
It seems more natural to begin an investigation by asking the question one wants to answer and then asking to what extent economic theory can shed light on it. Standard economic theory does not explain the behavior of third-grade students. So while one could start by building an equilibrium model of schoolchildren's learning and use it to structure empirical tests, there is a strong case for a purely statistical approach: there are thousands of third-grade classes with roughly the same curriculum, and many opportunities to find interesting variation in class size with which to assess the influence of this parameter on learning outcomes, as in the sketch below.
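As a rough illustration of what such a statistical approach looks like in practice, the following sketch regresses a simulated, purely hypothetical test score on class size; no structural model of pupil behavior enters anywhere.

```python
# A minimal sketch of the "atheoretical" statistical approach described above:
# regress a learning outcome on class size across many classrooms (simulated data).
import numpy as np

rng = np.random.default_rng(0)
class_size = rng.integers(18, 35, size=2000).astype(float)        # hypothetical class sizes
test_score = 75 - 0.3 * class_size + rng.normal(0, 5, size=2000)  # simulated outcomes

# Ordinary least squares of score on a constant and class size.
X = np.column_stack([np.ones_like(class_size), class_size])
(intercept, slope), *_ = np.linalg.lstsq(X, test_score, rcond=None)

print(f"Estimated effect of one extra pupil on the test score: {slope:.2f}")
# Credibility rests entirely on whether the variation in class size is
# unrelated to other determinants of scores, not on any equilibrium model.
```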
This paradigm is not always suitable for industrial market research, however. Industrial organization theory primarily studies the functioning of firms and markets, an area about which economic theory has a great deal to say, and, when used appropriately, theory tends to clarify rather than confuse researchers' understanding of markets. Moreover, in many studies the main interest is not the causal relationship by itself but an understanding of the mechanisms at work. As has been repeatedly emphasized, markets differ significantly from one another, and transferring estimates of demand elasticities, production costs or policy effects from one market to another is often not justified. If a researcher seeks conclusions that generalize, it is more compelling to use empirical work to substantiate principles of strategic interaction or of market functioning that are broadly applicable across industries.
A specific example for which empirical research in industrial market theory has been criticized, merger analysis, illustrates this point. As described above, theoretical models of industry competition are usually used in evaluating a proposed merger or acquisition, and researchers have spent a great deal of time developing econometric tools to quantify possible outcomes within such models. This work is criticized as bearing only a distant relationship to the question posed; why not, the critics ask, devote more time to retrospective analysis of past mergers? On the one hand, this is a useful objection: retrospective analyses can indeed be valuable. On the other hand, it misses the point entirely. Does anyone seriously believe that if the US Department of Justice were considering a proposed merger between Microsoft and Yahoo!, its staff should study the price effects of previous airline or stationery mergers, or even of mergers that came about through chance meetings of company heads or lunar eclipses? It is far more productive to lay down a clear conceptual framework for analyzing the possible impact of the merger and to use the best available data judiciously within it.
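To show what simulating a merger inside a model means in its simplest form, here is a sketch based on a symmetric Cournot model with linear demand. This is a deliberately stylized stand-in for the richer differentiated-products models actually used in merger review, and every parameter value below is hypothetical.

```python
# A minimal sketch of merger simulation under symmetric Cournot competition
# with linear inverse demand P = a - Q and constant marginal cost c.

def cournot_price(n_firms: int, a: float, c: float) -> float:
    """Equilibrium price with n symmetric Cournot firms: P = (a + n*c) / (n + 1)."""
    return (a + n_firms * c) / (n_firms + 1)

a, c = 100.0, 20.0                  # demand intercept and marginal cost (hypothetical)
pre = cournot_price(5, a, c)        # five independent firms before the merger
post = cournot_price(4, a, c)       # two of them merge; no cost efficiencies assumed

print(f"Pre-merger price:  {pre:.2f}")
print(f"Post-merger price: {post:.2f}")
print(f"Predicted price increase: {100 * (post - pre) / pre:.1f}%")
# In practice the demand and cost parameters would be estimated from data, and
# richer demand systems and conduct assumptions would replace the Cournot setup.
```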
The main takeaway, however, is that economic theory and the search for convincing sources of identifying variation in the data do not contradict each other. If anything should concern us about the further development of industrial market theory, it is the lack of emphasis on such applied work in comparison with the ever-expanding set of econometric methods. More effective methods are of course valuable, provided they are ultimately used for their intended purpose and do not become an end in themselves. Returning to the literature on demand estimation: perhaps one reason researchers are willing to tolerate far-from-ideal price variation is that in some cases the product of the work is not the elasticity estimate itself but an econometric method that can be applied more widely. While such results are not particularly objectionable, it is important that industrial market theory find a reasonable balance between developing tools and using them convincingly. Whether the field has strayed too far is debatable, but the very fact that there is so much controversy on the question is thought-provoking.
Historically, the current situation can be explained by data limitations. Indeed, the most significant methodological breakthroughs in industrial market theory have been responses to this problem. The pioneering models of market entry were devised precisely to cope with the lack of data on prices and quantities of goods. The newer approach to demand estimation gained so much credibility in part because it requires only market-level data rather than individual-level data.
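As an illustration of how market-level data alone can be used for demand estimation, the sketch below applies a simple aggregate logit inversion, a stripped-down relative of the method alluded to here. All numbers are invented, and the least-squares step ignores the price endogeneity that a serious application would address with instruments.

```python
# A minimal sketch (made-up numbers) of demand estimation from market-level shares:
# the logit inversion ln(s_j) - ln(s_0) = const + price_coef * p_j.
import numpy as np

# Hypothetical shares and prices for 3 products in 4 markets, plus the
# outside-good share s0 in each market.
prices = np.array([3.0, 3.5, 4.0, 2.8, 3.6, 4.2, 3.1, 3.4, 3.9, 2.9, 3.7, 4.1])
shares = np.array([0.20, 0.15, 0.10, 0.22, 0.14, 0.08, 0.19, 0.16, 0.11, 0.21, 0.13, 0.09])
s0     = np.repeat([0.55, 0.56, 0.54, 0.57], 3)

y = np.log(shares) - np.log(s0)                      # inverted mean utilities
X = np.column_stack([np.ones_like(prices), prices])  # constant and price
(const, price_coef), *_ = np.linalg.lstsq(X, y, rcond=None)

alpha = -price_coef                                  # price sensitivity (should be > 0)
own_elasticities = -alpha * prices * (1.0 - shares)  # logit own-price elasticities
print(f"alpha = {alpha:.2f}")
print("own-price elasticities:", np.round(own_elasticities, 2))
```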
Today almost every large company collects a huge amount of data about its customers, employees and other aspects of its operations. It is becoming easier to obtain data on prices and quantities of goods, on market entry and exit, on the location of firms, and on accounting variables. The growing data base may make some of these workaround methods unnecessary and lead economists studying industrial market theory to focus more on applying existing methods than on developing new ones.
Another problem is that an emphasis on elegance can steer researchers toward less important questions. Suppose, for example, that we are choosing research questions about the internet platform eBay. If eBay is treated mainly as a laboratory for applying elegant empirical methods from auction theory, the natural step is to focus on narrow market segments in order to isolate specific features of the auction format. While this approach can be fruitful, it can distract researchers from broader questions, such as why eBay is so successful as an organization or how it competes with other platforms that connect buyers and sellers, such as Amazon. In fact, economists in every field should resist the temptation to tackle only one type of problem.
The last and important problem for the future of industrial market theory is associated with the shift from analyzing groups of industries to studying individual industries. Over twenty years of research an enormous stock of knowledge has accumulated about the functioning of particular industries, but this knowledge is extremely fragmented. Economists have studied in detail industries such as automobiles, commercial aviation, electricity, and the production of cement and of concrete (these are different industries!). However, the knowledge gained cannot easily be transferred from one industry to another. As a result, in the study of many interesting and important questions about the general organization of production, industrial market theory has ceded ground to other fields, in particular trade theory and macroeconomics.

 

