Advantages And Disadvantages Of Cocomo Model In Software Engineering


The Comparison of the Software Cost Estimating Methods

Liming Wu, University of Calgary. Email: wul@cpsc.ucalgary.ca

Abstract: Practitioners have expressed concern over their inability to accurately estimate costs associated with software development. This concern has become even more pressing as costs associated with development continue to increase.



Considerable research is now directed at constructing, evaluating, and selecting better software cost estimation models and tools for specific software development projects. This essay gives an overview of cost estimation models and then discusses their advantages and disadvantages.

Finally, guidelines for selecting appropriate cost estimation models are given, and a combination method is recommended. Surveys have found that nearly one-third of projects overrun their budget and are delivered late, and that two-thirds of all major projects substantially overrun their original estimates.


Accurate prediction of software development costs is critical to making good management decisions and to determining how much effort and time a project will require, for project managers as well as for system analysts and developers. Without a reasonably accurate cost estimation capability, project managers cannot determine how much time and manpower the project should take, which means the software portion of the project is out of control from its beginning; system analysts cannot make realistic hardware-software trade-off analyses during the system design phase; and software project personnel cannot tell managers and customers that their proposed budget and schedule are unrealistic. This may lead to optimistic over-promising on software development, with the inevitable overruns and performance compromises as a consequence. In fact, huge overruns resulting from inaccurate estimates are believed to occur frequently. The overall process of developing a cost estimate for software is not different from the process for estimating any other element of cost. There are, however, aspects of the process that are peculiar to software estimating. Some of the unique aspects of software estimating are driven by the nature of software as a product.

Other problems are created by the nature of the estimating methodologies. Software cost estimation is a continuing activity which starts at the proposal stage and continues through the lifetime of a project. Continual cost estimation ensures that spending stays in line with the budget. Cost estimation is one of the most challenging tasks in project management: accurately estimating the resources and schedule needed for a software development project. The software estimation process includes estimating the size of the software product to be produced, estimating the effort required, developing preliminary project schedules, and, finally, estimating the overall cost of the project.

It is very difficult to estimate the cost of software development. Many of the problems that plague the development effort itself are responsible for the difficulty encountered in estimating that effort. One of the first steps in any estimate is to understand and define the system to be estimated. Software, however, is intangible, invisible, and intractable. It is inherently more difficult to understand and estimate a product or process that cannot be seen and touched. Software grows and changes as it is written.

When hardware design has been inadequate, or when hardware fails to perform as expected, the 'solution' is often attempted through changes to the software. This change may occur late in the development process and sometimes results in unanticipated software growth. After 20 years of research, there are many software cost estimation methods available, including algorithmic methods, estimating by analogy, the expert judgment method, the price-to-win method, the top-down method, and the bottom-up method. No one method is necessarily better or worse than another; in fact, their strengths and weaknesses are often complementary. Understanding those strengths and weaknesses is very important when you want to estimate your projects. Expert judgment techniques involve consulting a software cost estimation expert, or a group of experts, to use their experience and understanding of the proposed project to arrive at an estimate of its cost.

Generally speaking, a group consensus technique, the Delphi technique, is the best one to use. Its strengths and weaknesses are complementary to those of the algorithmic method. To provide a sufficiently broad communication bandwidth for the experts to exchange the volume of information necessary to calibrate their estimates with those of the other experts, a wideband Delphi technique was introduced over the standard Delphi technique.

The estimating steps using this method are:

1. The coordinator presents each expert with a specification and an estimation form.
2. The coordinator calls a group meeting in which the experts discuss estimation issues with the coordinator and each other.
3. Experts fill out forms anonymously.
4. The coordinator prepares and distributes a summary of the estimates on an iteration form.
5. The coordinator calls a group meeting, focusing specifically on having the experts discuss points where their estimates varied widely.
6. Experts fill out forms, again anonymously, and steps 4 through 6 are iterated for as many rounds as appropriate.

The wideband Delphi technique has subsequently been used in a number of studies and cost estimation activities. It has been highly successful in combining the free-discussion advantage of the group meeting technique with the anonymous-estimation advantage of the standard Delphi technique. The advantages of this method are:

- The experts can factor in differences between past project experience and the requirements of the proposed project.
- The experts can factor in project impacts caused by the new technologies, architectures, applications, and languages involved in the future project, and can also factor in exceptional personnel characteristics and interactions.

The disadvantages include:

- The method cannot be quantified.
- It is hard to document the factors used by the experts or the expert group.
- Experts may be biased, optimistic, or pessimistic, even though these tendencies are reduced by the group consensus.

The expert judgment method always complements other cost estimating methods, such as the algorithmic method. Estimating by analogy means comparing the proposed project to previously completed similar projects for which the development information is known. Actual data from the completed projects are extrapolated to estimate the proposed project.

This method can be used either at the system level or at the component level. Estimating by analogy is relatively straightforward.

Actually, in some respects it is a systematic form of expert judgment, since experts often search for analogous situations to inform their opinions. The steps in estimating by analogy are:

1. Characterize the proposed project.
2. Select the most similar completed projects whose characteristics have been stored in the historical database.
3. Derive the estimate for the proposed project from the most similar completed projects by analogy.

The main advantages of this method are:

- The estimates are based on actual project characteristic data.
- The estimator's past experience and knowledge, which are not easy to quantify, can be used.
- The differences between the completed and the proposed projects can be identified and their impacts estimated.

However, there are also some problems with this method. Using it, we have to determine how best to describe projects. The choice of variables must be restricted to information that is available at the point the prediction is required. Possibilities include the type of application domain, the number of inputs, the number of distinct entities referenced, the number of screens, and so forth.

Even once we have characterized the project, we have to determine the similarity and how much confidence we can place in the analogies. Too few analogies might lead to maverick projects being used; too many might dilute the effect of the closest analogies. Martin Shepperd et al. introduced a method of finding analogies by measuring Euclidean distance in n-dimensional space, where each dimension corresponds to a variable. Values are standardized so that each dimension contributes equal weight to the process of finding analogies. Generally speaking, two analogies are the most effective.
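As a minimal sketch of this procedure (the project data and feature names are hypothetical, and the distance-weighted mean it returns anticipates the derivation step discussed in the next paragraph):

```python
import math

# Hypothetical historical projects: a feature vector (e.g. number of inputs,
# distinct entities referenced, screens) plus the known effort in person-months.
historical = [
    {"features": [25, 4, 12], "effort": 90},
    {"features": [40, 7, 20], "effort": 160},
    {"features": [10, 2, 5],  "effort": 35},
]

def standardize(vectors):
    """Per-dimension (mean, std) so each variable carries equal weight."""
    stats = []
    for col in zip(*vectors):
        mean = sum(col) / len(col)
        std = (sum((x - mean) ** 2 for x in col) / len(col)) ** 0.5 or 1.0
        stats.append((mean, std))
    return stats

def estimate_by_analogy(new_features, projects, k=2):
    """Find the k closest completed projects by standardized Euclidean
    distance and return a distance-weighted mean of their efforts."""
    stats = standardize([p["features"] for p in projects] + [new_features])

    def dist(features):
        return math.sqrt(sum(((a - b) / s) ** 2
                             for a, b, (_, s) in zip(features, new_features, stats)))

    nearest = sorted(projects, key=lambda p: dist(p["features"]))[:k]
    # Closer analogies get more influence; epsilon avoids division by zero.
    weights = [1.0 / (dist(p["features"]) + 1e-9) for p in nearest]
    return sum(w * p["effort"] for w, p in zip(weights, nearest)) / sum(weights)

# Effort estimate for a new project lying between the two closest analogies.
print(estimate_by_analogy([30, 5, 15], historical))
```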

Finally, we have to derive an estimate for the new project by using the known effort values from the analogous projects. Possibilities include the mean and weighted means, the latter giving more influence to the closer analogies. It has been reported that estimating by analogy is superior to estimation via algorithmic models in at least some circumstances. It is a more intuitive method, so it is easier to understand the reasoning behind a particular prediction. The top-down estimating method is also called the macro model. Using the top-down method, an overall cost estimate for the project is derived from the global properties of the software project, and the project is then partitioned into various low-level components.

The leading method using this approach is the Putnam model. Top-down estimation is more applicable to early cost estimation, when only global properties are known; it is very useful in the early phase of software development precisely because no detailed information is yet available. The advantages of this method are:

- It focuses on system-level activities such as integration, documentation, and configuration management, many of which may be ignored in other estimating methods, so it will not miss the cost of system-level functions.
- It requires minimal project detail, and it is usually faster and easier to implement.

The disadvantages are:

- It often does not identify difficult low-level problems that are likely to escalate costs, and it sometimes tends to overlook low-level components.

- It provides no detailed basis for justifying decisions or estimates.

Because it provides a global view of the software project, it usually embodies some effective features, such as the cost-time trade-off capability that exists in the Putnam model. Using the bottom-up estimating method, the cost of each software component is estimated and the results are then combined to arrive at an estimated cost for the overall project. It aims at constructing the estimate of a system from the knowledge accumulated about the small software components and their interactions. The leading method using this approach is COCOMO's detailed model.

The advantages:

- It permits the software group to handle an estimate in an almost traditional fashion and to handle estimate components for which the group has a feel.
- It is more stable, because the estimation errors in the various components have a chance to balance out.

The disadvantages:

- It may overlook many of the system-level costs (integration, configuration management, quality assurance, etc.) associated with software development.
- It may be inaccurate, because the necessary information may not be available in the early phase.
- It tends to be more time-consuming.
- It may not be feasible when either time or personnel are limited.

The algorithmic method is designed to provide mathematical equations with which to perform software estimation. These equations are based on research and historical data and use inputs such as Source Lines of Code (SLOC), the number of functions to perform, and other cost drivers such as language, design methodology, skill levels, risk assessments, and so on.

The algorithmic methods have been studied extensively, and many models have been developed, such as COCOMO, the Putnam model, and function-point based models. General advantages:

- It is able to generate repeatable estimates.

- It is easy to modify input data and to refine and customize formulas.
- It is efficient and able to support a family of estimates or a sensitivity analysis.
- It is objectively calibrated to previous experience.

General disadvantages:

- It is unable to deal with exceptional conditions, such as exceptional personnel, exceptional teamwork, or an exceptional match between skill levels and tasks.

- Poor sizing inputs and inaccurate cost driver ratings will result in inaccurate estimates.
- Some experience and factors cannot easily be quantified.

One very widely used algorithmic software cost model is the Constructive Cost Model (COCOMO). The basic COCOMO model has a very simple form:

    MAN-MONTHS = K1 * (Thousands of Delivered Source Instructions)^K2

where K1 and K2 are two parameters dependent on the application and development environment. Estimates from the basic COCOMO model can be made more accurate by taking into account other factors concerning the required characteristics of the software to be developed, the qualifications and experience of the development team, and the software development environment. Some of these factors are:

- Complexity of the software

- Required reliability
- Size of database
- Required efficiency (memory and execution time)

- Analyst and programmer capability
- Experience of the team in the application area

- Experience of the team with the programming language and computer
- Use of tools and software engineering practices

Many of these factors affect the person-months required by an order of magnitude or more.
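To make the shape of the model concrete, here is a minimal sketch. The coefficients 2.4 and 1.05 are Boehm's published basic-COCOMO values for an 'organic' project; the cost-driver multipliers passed in are hypothetical placeholders, not the calibrated tables from Software Engineering Economics:

```python
def cocomo_effort(kdsi, multipliers=()):
    """Person-months: MM = K1 * KDSI**K2, scaled by cost-driver multipliers."""
    effort = 2.4 * kdsi ** 1.05      # nominal effort for an "organic" project
    for m in multipliers:            # each driver (reliability, experience...)
        effort *= m                  # ...scales the nominal estimate
    return effort

# A 32-KDSI project with two hypothetical driver ratings: high required
# reliability (1.15) and an experienced team (0.85).
print(round(cocomo_effort(32, (1.15, 0.85))))  # about 89 person-months
```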

COCOMO assumes that the system and software requirements have already been defined and that these requirements are stable; this is often not the case. The COCOMO model is a regression model, based on the analysis of 63 selected projects.

The primary input is KDSI. The problems are:

- In the early phase of the system life cycle, the size can be estimated only with great uncertainty, so an accurate cost estimate cannot be obtained.
- The cost estimation equation is derived from the analysis of 63 selected projects, and it usually runs into problems outside of that particular environment. For this reason, recalibration is necessary.

According to Kemerer's research, the average error for all versions of the model is 601%. The detailed and intermediate models do not seem to be much better than the basic model. The first version of the COCOMO model was originally developed in 1981.

Since then, COCOMO has experienced increasing difficulties in estimating the cost of software developed to new life-cycle processes and capabilities, including rapid-development process models, reuse-driven approaches, object-oriented approaches, and the software process maturity initiative. For these reasons, the newest version, COCOMO 2.0, was developed. Its major new modeling capabilities are a tailorable family of software size models, involving object points, function points, and source lines of code; nonlinear models for software reuse and reengineering; an exponent-driver approach for modeling relative software diseconomies of scale; and several additions, deletions, and updates to the previous COCOMO effort-multiplier cost drivers. This new model is also serving as a framework for an extensive current data collection and analysis effort to further refine and calibrate the model's estimation capabilities. Another popular software cost model is the Putnam model. The form of this model is:

    Technical constant C = Size / (B^(1/3) * T^(4/3))

or, equivalently,

    Total person-months B = (1/T^4) * (Size/C)^3

where T is the required development time in years and Size is estimated in LOC. C is a parameter dependent on the development environment and is determined on the basis of historical data from past projects; typical ratings are C = 2,000 (poor), C = 8,000 (good), and C = 12,000 (excellent). The Putnam model is very sensitive to the development time: decreasing the development time can greatly increase the person-months needed for development. One significant problem with the Putnam model is that it is based on knowing, or being able to estimate accurately, the size (in lines of code) of the software to be developed.

There is often great uncertainty in software size, which may make the cost estimate inaccurate. According to Kemerer's research, the error percentage of SLIM, a method based on the Putnam model, is 772.87%.
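A minimal sketch of the model, using C = 8,000 ('good') from the ratings above and a hypothetical 100 KLOC project, makes the schedule sensitivity visible:

```python
def putnam_effort(size_loc, dev_time_years, c=8000):
    """Total person-months: B = (1/T**4) * (Size/C)**3, per the form above."""
    return (size_loc / c) ** 3 / dev_time_years ** 4

SIZE = 100_000                      # hypothetical 100 KLOC project
for t in (2.0, 1.5):                # compress the schedule by six months...
    print(t, round(putnam_effort(SIZE, t)))
# Effort grows by a factor of (2.0 / 1.5)**4, i.e. more than 3x,
# which is the model's extreme sensitivity to development time.
```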

From the above two algorithmic models, we see that both require the estimators to estimate the number of SLOC in order to get man-month and duration estimates. Function Point Analysis is another method of quantifying the size and complexity of a software system, in terms of the functions that the system delivers to the user. A number of proprietary models for cost estimation, such as ESTIMACS and SPQR/20, have adopted a function point type of approach. The function point measurement method was developed by Allan Albrecht at IBM and published in 1979. He believed that function points offer several significant advantages over SLOC counts as a size measurement. There are two steps in counting function points:

1. Counting the user functions. The raw function counts are arrived at by considering a linear combination of five basic software components, namely external inputs, external outputs, external inquiries, logical internal files, and external interfaces, each at one of three complexity levels: simple, average, or complex. The sum of these numbers, weighted according to the complexity level, is the number of function counts (FC).

2. Adjusting for environmental processing complexity. The final function point count is arrived at by multiplying FC by an adjustment factor that is determined by considering 14 aspects of processing complexity. This adjustment factor allows FC to be modified by at most +35% or -35%.
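A minimal sketch of the two counting steps (the component weights are the commonly cited average-complexity values, an assumption here since the text does not list them, and the counts and ratings are hypothetical):

```python
# Commonly cited average-complexity weights for the five components.
AVG_WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4,
               "internal_files": 10, "interfaces": 7}

def function_points(counts, influence_ratings):
    """Step 1: weighted raw function counts (FC).
    Step 2: multiply by 0.65 + 0.01 * sum of the 14 processing-complexity
    ratings (each 0..5), which bounds the adjustment to +/-35%."""
    fc = sum(AVG_WEIGHTS[component] * n for component, n in counts.items())
    assert len(influence_ratings) == 14
    return fc * (0.65 + 0.01 * sum(influence_ratings))

counts = {"inputs": 20, "outputs": 12, "inquiries": 8,
          "internal_files": 6, "interfaces": 3}
ratings = [3] * 14                       # all 14 aspects rated "average"
print(function_points(counts, ratings))  # FC = 253; adjusted ~ 270.7
```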

The collection of function point data has two primary motivations. One is the desire by managers to monitor levels of productivity; the other is its use in estimating software development cost. Some cost estimation methods are based on a function point type of measurement, such as ESTIMACS and SPQR/20. SPQR/20 is based on a modified function point method: whereas traditional function point analysis is based on evaluating 14 factors, SPQR/20 separates complexity into three categories: complexity of algorithms, complexity of code, and complexity of data structures. ESTIMACS is a proprietary system designed to give a development cost estimate at the conception stage of a project, and it contains a module which estimates function points as a primary input for estimating cost. The advantages of function point analysis based models are:

- Function points can be estimated from requirements specifications or design specifications, making it possible to estimate development cost in the early phases of development.

- Function points are independent of the language, tools, or methodologies used for implementation.
- Non-technical users have a better understanding of what function points are measuring, since function points are based on the system user's external view of the system.

From Kemerer's research, the mean error percentage of ESTIMACS is only 85.48%. So, considering the 601% error of COCOMO and the 772.87% error of SLIM, I think the function point based cost estimation methods are the better approach, especially in the early phases of development. From the above comparison, we know that no one method is necessarily better or worse than another; in fact, their strengths and weaknesses are often complementary. Based on experience, it is recommended that a combination of models with analogy or expert judgment estimation methods be used to obtain reliable, accurate cost estimates for software development.

For known projects and project parts, we should use the expert judgment method or the analogy method where similar projects can be found, since under these circumstances they are fast and reliable. For large, lesser-known projects, it is better to use an algorithmic model, and in this case many researchers recommend estimation models that do not require SLOC as an input. I think COCOMO 2.0 is the first candidate, because the COCOMO 2.0 model can use not only source lines of code (SLOC) but also object points and unadjusted function points as metrics for sizing a project. If we approach cost estimation by parts, we may use expert judgment for the known parts. This way we can take advantage of both the rigor of models and the speed of expert judgment or analogy. Because the advantages and disadvantages of each technique are complementary, a combination will reduce the negative effect of any one technique, augment their individual strengths, and help to cross-check one method against another.

It is very common to apply some cost estimation method to estimate the cost of software development. What we have to note is that it is very important to continually re-estimate cost and to compare targets against actual expenditure at each major milestone. This keeps the status of the project visible and helps to identify necessary corrections to budget and schedule as soon as they occur.

At every estimation and re-estimation point, iteration is an important tool for improving estimation quality. The estimator can use several estimation techniques and check whether their estimates converge. The other advantages are as follows:

- Different estimation methods may use different data. This results in better coverage of the knowledge base for the estimation process.
- It can help to identify cost components that cannot be dealt with, or were overlooked, in one of the methods.

- Different viewpoints and biases can be taken into account and reconciled. A competitive contract bid, a high business priority to keep costs down, or a small market window with the resulting tight deadlines tends to produce optimistic estimates, while a production schedule established by the developers is usually more on the pessimistic side, to avoid committing to a schedule and budget one cannot meet.

It is also very important to compare actual cost and time to the estimates, even if only one or two techniques are used.

This will also provide the necessary feedback to improve estimation quality in the future. In general, a historical database for cost estimation should be set up for future use. Identifying the goals of the estimation process is very important, because they will influence the effort spent estimating, its accuracy, and the models used. Tight schedules with high risks require more accurate estimates than loosely defined projects with a relatively open-ended schedule. The estimators should look at the quality of the data upon which estimates are based and at the various objectives. The act of calibration standardizes a model. Many models are developed for specific situations and are, by definition, calibrated to those situations.

Such models are usually not useful outside of their particular environment. Calibration is therefore needed to increase the accuracy of one of these general models by making it, temporarily, a specific model for whatever product it has been calibrated for.

Calibration is, in a sense, the customizing of a generic model. Items which can be calibrated in a model include product types, operating environments, labor rates and factors, various relationships between functional cost items, and even the method of accounting used by a contractor. All general models should be standardized (i.e., calibrated), unless used by an experienced modeler with the appropriate education, skills, tools, and experience in the technology being modeled.


Calibration is the process of determining the deviation from a standard in order to compute correction factors. For cost estimating models, the standard is taken to be historical actual costs. The calibration procedure is theoretically very simple: run the model with normal inputs (known parameters such as software lines of code) against items for which the actual costs are known. These estimates are then compared with the actual costs, and the average deviation becomes a correction factor for the model.
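A minimal sketch of that procedure, with hypothetical estimates and actuals, taking the mean actual-to-estimate ratio as one simple reading of 'average deviation':

```python
def calibration_factor(estimates, actuals):
    """Mean ratio of actual to estimated cost over the calibration set."""
    ratios = [actual / estimate for estimate, actual in zip(estimates, actuals)]
    return sum(ratios) / len(ratios)

model_estimates = [100, 250, 80]    # person-months from the uncalibrated model
actual_costs    = [130, 300, 110]   # recorded person-months for the same items

factor = calibration_factor(model_estimates, actual_costs)
print(round(factor, 2))             # ~1.29: scale future estimates up ~29%
```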

In essence, the calibration factor obtained is really good only for the type of inputs that were used in the calibration runs. For a general total-model calibration, a wide range of components with actual costs needs to be used. Better yet, numerous calibrations should be performed with different types of components, in order to obtain a set of calibration factors for the various expected estimating situations. The accurate prediction of software development costs is a critical issue for making good management decisions and for determining how much effort and time a project requires, for project managers as well as system analysts and developers. There are many software cost estimation methods available, including algorithmic methods, estimating by analogy, expert judgment, top-down, and bottom-up methods.

No one method is necessarily better or worse than another; in fact, their strengths and weaknesses are often complementary. Understanding those strengths and weaknesses is very important when you want to estimate your projects. For a specific project, which estimation methods should be used depends on the environment of the project. Weighing the weaknesses and strengths of the methods, you can choose which to use. I think a combination of the expert judgment or analogy method with COCOMO 2.0 is the best approach one can choose. For known projects and project parts, we should use the expert judgment method or the analogy method where similar projects can be found, since under these circumstances they are fast and reliable; for large, lesser-known projects, it is better to use an algorithmic model like COCOMO 2.0, which will be available in early 1997.

If COCOMO 2.0 is not available, ESTIMACS or other function point based methods are highly recommended, especially in the early phase of the software life cycle, because in that phase SLOC-based methods face great uncertainty about size. If there is great uncertainty about size, reuse, cost drivers, and so on, the analogy method or the wideband Delphi technique should be considered as the first candidate. COCOMO 2.0 also has the capabilities to deal with current software processes and serves as a framework for an extensive data collection and analysis effort to further refine and calibrate the model's estimation capabilities. In general, COCOMO 2.0 should become very popular.

Barry Boehm and his students are developing COCOMO 2.0. They expect to have it calibrated and usable in early 1997. Some recommendations:

- Do not depend on a single cost or schedule estimate.

- Use several estimating techniques or cost models, compare the results, and determine the reasons for any large variations.
- Document the assumptions made when making the estimates.
- Monitor the project to detect when assumptions that turn out to be wrong jeopardize the accuracy of the estimate.
- Improve the software process: an effective software process can be used to increase accuracy in cost estimation in a number of ways.
- Maintain a historical database.

References:

- Bernard L. 'Cost Estimation for Software Development', Addison-Wesley, 1987.
- Boehm, B.W. 'Software Engineering Economics', Prentice-Hall, 1981.
- Shepperd, M. 'Effort Estimation Using Analogy', IEEE, 1996.
- Kemerer, C.F. 'An Empirical Validation of Software Cost Estimation Models', CACM, May 1987.
- Albrecht, A.J. 'Software Function, Source Lines of Code, and Development Effort Prediction: A Software Science Validation', IEEE Transactions on Software Engineering, Nov. 1983.
- Lederer, Albert L. and Prasad, Jayesh. 'Nine Management Guidelines for Better Cost Estimating', CACM, Vol. 35, No. 2, Feb. 1992.
- Boehm, B.W.
- Shaw, M.L.G.


Cost Estimation

INTRODUCTION

Managers are supposed to plan. Planning includes budgeting. IS budgeting includes software development and acquisition costs. Can these costs be budgeted, or shall we say 'predicted'?

Over the years many have attempted to determine a priori what the cost of developing a specific application will be. Why has this been so important? Not only is the budget on the line, but many times a manager's job or reputation as well. The make-or-buy decision must be made. What cost is this that we are trying to estimate, determine, or 'predict'?

We know that the cost of developing software, up until the point that it is accepted, is only a fraction of the total cost of the system over the typical life cycle of the product. However, for the purpose of this study, we will exclude the maintenance costs, and will speak only of the development costs up until acceptance.

This position is consistent with that taken by those who have done research in this field. We will first review and discuss the main published methods (lines of code, function points, and objects) and some basic terminology relating to them, followed by a discussion of current trends and, finally, the implications of these trends for software cost estimation.

PUBLISHED TECHNIQUES

We will look at three basic researched methodologies for a priori software cost estimation: lines of code, functions, and objects. For each, we will describe the methodology used, with its accompanying advantages and disadvantages. We must note that, thus far, all researched models have approached cost estimation through estimation of the effort (generally man-months) involved in the project.

LINES OF CODE

This general approach is actually subdivided into two different areas: SLOC (Source Lines of Code) and SDI (Source Delivered Instructions). The difference between the two is that the first, SLOC, takes into account all the housekeeping which must be done by the developer, such as headers and embedded comments.

The second, SDI, takes into account only the number of executable lines of code. The best known technique using LOC (Lines of Code) is COCOMO (COnstructive COst MOdel), developed by Boehm.

This model, along with other SLOC/SDI based models, uses not only the LOC but also other factors such as product attributes, hardware limitations, personnel, and development environment. These different factors lead to one or more 'adjustment' factors which adjust the direct evaluation of the effort needed. In COCOMO's case, there are fifteen such factors derived by Boehm. The model relates LOC to cost through a power law that is close to linear. Another model in this category (LOC) is the Putnam Estimation Model. This model includes more variables and is decidedly non-linear in nature.

The estimation is affected not only by the SDI but also by the software development environment and the desired development time. Other models using LOC are BYL (Before You Leap) by the Gordon Group, WICOMO (Wang Institute Cost Model), DECPlan (Digital Equipment), and SLIM (an application of the Putnam Estimation Model).

FUNCTIONS

Cost estimation based on the expected functionality of the system was first proposed by Albrecht in 1979 and has since been researched by several people. This kind of cost estimation relies on function points and requires the identification of all occurrences of five unique function types: external inputs, external outputs, logical internal files, external interfaces, and queries. The sum of all occurrences is called the raw function count (FC).

This value must be modified by a weighted rating of complexity factors, giving a Technical Complexity Factor (TCF). The function points for any given project are then FC × TCF. This technique has been evaluated by several authors, and some attempts have been made at refining the model. These refinements have proven 'more successful' than the original model at estimating cost a priori. Overall, the function point models appear to predict the effort needed for a specific project more accurately than LOC-based models.

OBJECTS

Cost estimation based on objects has recently been introduced, given the ascendancy of Object-Oriented Programming (OOP) and object-oriented CASE tools. The basic approach is similar to function-based cost estimation, yet, as the name implies, it counts objects, not functions.

Research until now has been very limited and has not shown any improvement in reliability over function-based methods.

TRENDS

What are the current trends in software cost estimation? What changes in systems development affect software cost estimation? We will examine the major changes which have been taking place in recent times.

USE OF SLOC/SDI

In the past few years, the trend among practitioners has been to move away from SLOC and SDI and to work based on function points.

The reasoning for this is that function points are more 'independent' (they are less dependent on the language and the programming environment) than SLOC and SDI.

PROTOTYPING

In recent years prototyping has become a major component of many systems development efforts. Boehm and Papaccio's spiral development model is in essence a prototyping model in which a system is developed in phases, with requirements specifications, cost to completion, and risk evaluated at each step.

CASE TOOLS AND PROGRAM GENERATORS

In the last few years, CASE tools and program generators have developed to the point that some companies are no longer 'programming' in the traditional sense of the word. They are in essence just doing an in-depth analysis which, when it is complete, gives them a working system.

Along the way, they may generate the system many times to test it, using the system as a prototype development platform.

IN-HOUSE METRICS DEVELOPMENT

Today, most major systems developers and consultants have a methodology for determining a priori the cost of a software development project. Such a methodology is proprietary, and we can only be aware of its externals. The cost estimation methodology is linked to a specific systems analysis and design methodology, and the estimates are based on the use of the analysis methodology and the experience of the firm.

PROBLEMS AND EVALUATION

Given the differing methodologies and current trends in software development, what research can and/or should be done? To see this, let us look at the overall situation, with an evaluation of the problems and advantages of each cost estimation methodology.

It is apparent that there is room, and even desire, for improved metrics. It is clear that there is no perfect way of estimating cost a priori, but there are ways which may be acceptable. In order to evaluate the three methods outlined, we must fully understand the problems each presents.

LINES OF CODE

This, the oldest of the models, is probably not going to generate much in the way of new research. Current trends, in which software development is moving to prototyping, CASE tools, and 4GLs, make the use of LOC much less stable. In order to get a model which suits the environment, there must be many projects of different types and sizes in a stable environment.


This is generally no longer the case, as fewer and fewer organizations have significant numbers of new applications 'written' entirely by programmers. Even if the number of projects exists, calibration is not easy, due to the differing capacities of programmers and environments.

FUNCTION POINTS

This widely used technique has calibration problems, just as the LOC models do.

However, the calibration problems seem to be simpler and easier to define. One factor which accounts for this ease of calibration is that, since function points are independent of the programming environment, it is possible to use data gathered at other sites, as is currently being done by Software Productivity Research, Inc. [DREG89]. In the past, many people have used function points to determine the LOC, and have then done the cost estimation using the LOC. This methodology is incorrect, in that it adds one more error factor into the equation.

If function points are the only independent variable used to estimate lines of code, the LOC are not needed.

OBJECT POINTS

This newest methodology is too new to be evaluated empirically. As a matter of fact, the one paper available, published by Banker, Kauffman, and Kumar [BANK91], gives data and correlations which until now I have been unable to verify. Either the data presented in Table 6 are incorrect, or the explanation of the variables given is such that I have been unable to fit the data to the model given. The authors noted that there were significant variances across time, as software developers became more and more familiar with the CASE development environment which was being evaluated. The concept of using objects to estimate cost is tantalizing in its simplicity, but it has yet to demonstrate viability over the long haul.

It has been tested in only one case so far. However, any model which shows promise in spite of the significant variability introduced by a new software development technique should be evaluated further.

OVERALL PROBLEMS

It is clear that at the current time no well-known model is available to practitioners who desire to put one into practice. At the same time, we can see that different companies, such as Andersen Consulting, offer cost estimation tools to their customers and are highly 'successful' at what they do. From my experience and that of all practitioners who have attempted cost estimation, we note that cost estimation is a very difficult undertaking, much subject to the variability of human beings. We must realize that in psychological research any model which can explain even 50% of the variance in behavior is highly regarded. Should we consider that human behavior is a large factor in the software development process, and therefore in cost estimation?

Where are successful models being built? In organizations which have a large number of applications development projects and a very structured methodology for software development. I have been unable to find any published cost estimation methodology that has been shown to explain more than 70% of the variance across different organizations.

FUTURE RESEARCH

In the research that has been done, and in practice, no cost estimation principle is very predictive apart from a given methodology. It is therefore necessary to study a given cost estimation technique in relation to a given methodology, in an attempt to develop an empirical model with higher explanatory power than current models have. The paper by Banker, Kauffman, and Kumar made it obvious that not only must the cost estimation technique be stable, but the development tools must be stable as well; it is very difficult to develop a model whose accuracy depends on where one stands in the cycle of development techniques. There is currently an ongoing project by Software Productivity Research, Inc. to gather a set of over 10,000 varied projects using function point analysis [DREG89]. This project, if completed, promises to be the first major empirical study of cost estimation across multiple development platforms and multiple development techniques.

In the bibliographic search conducted, no reports of the conclusions of this study have been found.

CONCLUSIONS AND IMPLICATIONS FOR PROJECT MANAGERS

While software cost prediction models are still in relative infancy, it is clear that each manager must be able to prepare a budget for the project. Of the techniques presented in this paper, the function point analysis technique is the most robust. This is not to say that it must be used to the exclusion of other techniques, but it is the technique for which the largest body of empirical research has been conducted. Object points are a promising technique in object-oriented CASE environments, but much remains to be studied, and SLOC models are becoming outdated due to new methodologies. Is there a 'best' technique? Yes: whatever works in the given environment.

With careful calibration for a given environment, it is possible for the manager to develop a cost estimation model which closely reflects that environment. This is not without effort and much time, but it can be financially rewarding, as well as providing peace of mind for the manager.

BIBLIOGRAPHY

Albrecht, A. 'Measuring Application Development Productivity.' In Proceedings of the IBM Applications Development Symposium, GUIDE/SHARE, Monterey, CA, Oct. 1979.

Bailey, John W. and Basili, Victor R. 'A Meta-Model for Software Development Resource Expenditures.'

Boehm, Barry W. and Papaccio, Philip N. 'Understanding and Controlling Software Costs.'

Cuelenaere, A.E., van Genuchten, M., and Heemstra, F. 'Calibrating a Software Cost Estimation Model: Why and How.' Information and Software Technology.

Dreger, J. Brian. Function Point Analysis. Englewood Cliffs, NJ: Prentice Hall, 1989. [DREG89]

Banker, R., Kauffman, R., and Kumar, R. 'An Empirical Test of Object-Based Output Measurement Metrics in a Computer Aided Software Engineering (CASE) Environment.' Unpublished manuscript. [BANK91]

Kemerer, Chris F. 'An Empirical Validation of Software Cost Estimation Models.' Communications of the ACM, 30: 416-429.

Mendelson, Haim. The Economics of Information Systems Management. Unpublished manuscript, 1989.

Miyazaki, Y., Takanou, A., and Nozaki, H. 'Method to estimate parameter values in software prediction models.' Information and Software Technology, v. 33, April 1991.

Symons, Charles R. 'Function Point Analysis.'