Cost Estimation Techniques in Computing: A Review
Deepanker Choudhary*
B.Tech. Student, Department of Computer Sciences,
Manav Bharti University, Solan, H.P.
E-mail: deepanker06@yahoo.com
Vinod Kumar Bais
Assistant Professor, Department of Mathematics,
Manav Bharti University, Solan, H.P.
E-mail: vinod.k4bais@gmail.com
Abstract: Cost estimation of software projects with high accuracy at the conceptual phase is the basis for planning and for obtaining a feasible solution. However, a number of difficulties arise when conducting cost estimation during the conceptual phase. The major problems are lack of preliminary information, lack of a database of costs for comparable work, missing data, and lack of an appropriate cost estimation method. This paper reviews various cost estimation methods with a view to more accurate estimation for software projects in India at the conceptual phase.
Keywords: Cost estimation, computer software, cost model, PERT.
1. Introduction
Several indicators should be considered when estimating software cost and effort. One of the most important is the size of the project: the estimation of effort and cost depends on accurate prediction of size. In general, effort and cost estimation is difficult for software projects, because each project is often unique and there is no background or previous experience with it; prediction is therefore complicated.
At the initial stage of a project, there is high uncertainty about these project attributes. The estimate produced at this stage is inevitably inaccurate, as the accuracy depends highly on the
amount of reliable information available to the estimator. As we learn more about the project during analysis and later design stages, the uncertainties are reduced and more accurate estimates can be made. Most models produce exact results without regard to this uncertainty. They need to be enhanced to produce a range of estimates and their probabilities.
To improve the algorithmic models, there is a great need for the industry to collect project data on a wider scale. The recent effort of ISBSG is a step in the right direction [2].
With new types of applications, new development paradigms and new development tools, cost estimators are facing great challenges in applying known estimation models in the new millennium. Historical data may prove to be irrelevant for future projects. The search for reliable, accurate and low-cost estimation methods must continue. Several areas need immediate attention; for example, we need models for development based on formal methods or an iterative software process. Also, more studies are needed to improve the accuracy of cost estimates for maintenance projects. [1]
2. Method of Estimation
Cost estimation is an important part of any project. Costs are assumed to fall under project or modeling costs if the study involves project or modeling performance assessment:
i. Designer cost
ii. Programmer cost
iii. Labor cost
iv. Laboratory cost
v. Participant cost
vi. Subject matter expert cost
vii. User needs assessment cost
viii. Concept studies cost
ix. Prototype and usability assessment costs
x. Modeling and fast-time simulation costs
xi. Real-time human-in-the-loop simulation cost
xii. Experiment/study plan development cost
xiii. Scenario development cost
xiv. Scenario shakedown cost
xv. Final simulation cost
xvi. Data collection cost
xvii. Data analysis cost
xviii. Final report development cost
Generally, any time a project's data is collected or estimated in a study, experiment, or test, all of the costs affiliated with that data should be considered part of the project cost.
For example, in the top-down planning approach, the cost estimate is used to derive the project plan:
1. The project manager develops a characterization of the overall functionality, size, process, environment, people, and quality required for the project.
2. A macro-level estimate of the total effort and schedule is developed using a software cost estimation model.
3. The project manager partitions the effort estimate into a top-level work breakdown structure, and also partitions the schedule into major milestone dates and determines a staffing profile, which together form a project plan.
The actual cost estimation process involves seven steps:
1. Establish cost-estimating objectives
2. Generate a project plan for required data and resources
3. Pin down software requirements
4. Work out as much detail about the software system as feasible
5. Use several independent cost estimation techniques to capitalize on their combined strengths
6. Compare different estimates and iterate the estimation process
7. After the project has started, monitor its actual cost and progress, and feedback results to project management
No matter which estimation model is selected, users must pay attention to the following to get the best results:
a) Coverage of the estimate (some models generate effort for the full life-cycle, while others do not include effort for the requirement stage)
b) Calibration and assumptions of the model
c) Sensitivity of the estimates to the different model parameters
d) Deviation of the estimate with respect to the actual cost
2.1 Lines of Code
The exact LOC count can only be obtained after the project has been completed. Estimating the code size of a program before it is actually built is almost as hard as estimating the cost of the program.
A typical method for estimating the code size is to use experts' judgment together with a technique called PERT. It involves experts' judgment of three possible code-sizes:
S_{L}= the lowest possible size;
S_{H}= the highest possible size; and
S_{M}= the most likely size.
The estimate of the code-size S is computed as:
S = (S_{L} + S_{H} + 4S_{M})/6
PERT can also be used for individual components to obtain an estimate of the software system by summing up the estimates of all the components.
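As an illustration, the PERT size formula and its component-level aggregation can be sketched as follows; the component figures and the helper name `pert_size` are hypothetical (sizes in KLOC):

```python
def pert_size(s_low, s_high, s_likely):
    """PERT (three-point) estimate of code size: S = (S_L + S_H + 4*S_M) / 6."""
    return (s_low + s_high + 4 * s_likely) / 6

# Hypothetical expert judgments per component: (lowest, highest, most likely).
components = [
    (2.0, 6.0, 3.0),
    (1.0, 4.0, 2.5),
]

# Summing the component estimates gives a system-level size estimate.
system_size = sum(pert_size(lo, hi, m) for lo, hi, m in components)
```

The same three-point formula applies unchanged whether the unit is LOC, KLOC, or function points; only the inputs differ.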
3. Non-algorithmic Methods
3.1 Analogy costing:
This method requires one or more completed projects that are similar to the new project, and derives the estimate through reasoning by analogy using the actual costs of the previous projects. Estimation by analogy can be done either at the total-project level or at the subsystem level. The total-project level has the advantage that all cost components of the system are considered, while the subsystem level has the advantage of providing a more detailed assessment of the similarities and differences between the new project and the completed projects. The strength of this method is that the estimate is based on actual project experience.
However, it is not clear to what extent the previous project is actually representative of the constraints, environment and functions to be performed by the new system. Positive results and a definition of project similarity in terms of features were reported in [4].
3.2 Expert judgment:
This method involves consulting one or more experts, who provide estimates using their own methods and experience. Expert-consensus mechanisms such as the Delphi technique or PERT can be used to resolve inconsistencies in the estimates.
The Delphi technique works as follows:
1) The coordinator presents each expert with a specification and a form to record estimates.
2) Each expert fills in the form individually (without discussing with others) and is allowed to ask the coordinator questions.
3) The coordinator prepares a summary of all estimates from the experts (including the mean or median) on a form requesting another iteration of the experts' estimates and the rationale for the estimates.
4) Repeat steps 2)-3) as many rounds as appropriate.
A modification of the Delphi technique proposed by Boehm and Farquhar [5] seems to be more effective: before the estimation, a group meeting involving the coordinator and experts is arranged to discuss the estimation issues. In step 3), the experts do not need to give any rationale for their estimates. Instead, after each round of estimation, the coordinator calls a meeting to have the experts discuss those points where their estimates varied widely.
3.3 Parkinson:
Using Parkinson's principle that "work expands to fill the available volume" [6], the cost is determined (not estimated) by the available resources rather than by an objective assessment. If the software has to be delivered in 12 months and 5 people are available, the effort is estimated to be 60 person-months. Although it sometimes gives a good estimate, this method is not recommended, as it may produce very unrealistic estimates and does not promote good software engineering practice.
3.4 Price-to-win:
The software cost is estimated to be the best price to win the project. The estimate is based on the customer's budget instead of the software functionality. For example, if a reasonable estimate for a project is 100 person-months but the customer can only afford 60 person-months, it is common for the estimator to be asked to modify the estimate to fit 60 person-months of effort in order to win the project. This is again not good practice, since it is very likely to cause a serious delay in delivery or to force the development team to work overtime.
3.5 Bottom-up:
In this approach, each component of the software system is estimated separately and the results are aggregated to produce an estimate for the overall system. The requirement for this approach is that an initial design must be in place, indicating how the system is decomposed into components.
3.6 Top-down: This approach is the opposite of the bottom-up method. An overall cost estimate for the system is derived from global properties, using either algorithmic or non-algorithmic methods. The total cost can then be split up among the various components. This approach is more suitable for cost estimation at the early stage.
4. Algorithmic Methods
The algorithmic methods are based on mathematical models that produce a cost estimate as a function of a number of variables, which are considered to be the major cost factors. Any algorithmic model has the form:
Effort = f(x_{1}, x_{2}… x_{n})
where {x_{1}, x_{2}, …, x_{n}} denote the cost factors. The existing algorithmic methods differ in two aspects: the selection of cost factors, and the form of the function f. We first discuss the cost factors used in these models, and then characterize the models according to the form of the functions and whether the models are analytical or empirical.
4.1 Linear models
Linear models have the form:
Effort = a_{0}+∑a_{i}x_{i}; (i=1 to n)
where the coefficients a_{1}, …, a_{n} are chosen to best fit the completed project data. The work of Nelson belongs to this type of model [8]. We agree with Boehm's comment that "there are too many nonlinear interactions in software development for a linear model to work well."
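For concreteness, a one-factor linear model can be fitted to completed-project data with ordinary least squares. The sketch below is illustrative: the function name and the project data (size in KLOC, effort in person-months) are hypothetical.

```python
def fit_linear(xs, efforts):
    """Ordinary least squares for a one-factor linear model
    Effort = a0 + a1 * x, fitted to completed-project data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_e = sum(efforts) / n
    # Slope from the usual closed-form OLS solution, intercept from the means.
    a1 = (sum((x - mean_x) * (e - mean_e) for x, e in zip(xs, efforts))
          / sum((x - mean_x) ** 2 for x in xs))
    a0 = mean_e - a1 * mean_x
    return a0, a1

# Hypothetical completed projects: sizes 10..40 KLOC, efforts in person-months.
a0, a1 = fit_linear([10, 20, 30, 40], [28, 50, 78, 102])
```

With more than one cost factor the same idea generalizes to multiple regression, which is where Boehm's objection bites: the interactions between factors are rarely additive.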
4.2 Multiplicative models
Multiplicative models have the form:
Effort = a_{0} ∏ a_{i}^{x_{i}}; (i = 1 to n)
Again, the coefficients a_{1}, …, a_{n} are chosen to best fit the completed project data. Walston and Felix [9] used this type of model, with each x_{i} taking on only three possible values: -1, 0, +1. The Doty model [7] also belongs to this class, with each x_{i} taking on only two possible values: 0, +1. Both models seem too restrictive on the cost factor values.
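A minimal sketch of evaluating such a multiplicative model, with a hypothetical base effort and two hypothetical cost drivers rated in the Walston-Felix style (x_i restricted to -1, 0, +1):

```python
def multiplicative_effort(a0, factors):
    """Evaluate Effort = a0 * prod(a_i ** x_i), where each pair in `factors`
    is (a_i, x_i) and x_i is a rating restricted to -1, 0, or +1."""
    effort = a0
    for a_i, x_i in factors:
        assert x_i in (-1, 0, 1), "ratings are restricted to -1, 0, +1"
        effort *= a_i ** x_i
    return effort

# Hypothetical: base effort 50 person-months, one favourable driver (x = -1,
# so it divides the effort) and one neutral driver (x = 0, no effect).
effort = multiplicative_effort(50.0, [(1.25, -1), (1.10, 0)])
```

The restriction of x_i to a few discrete ratings is exactly what the text criticizes: a driver can only be "on", "off", or "inverted", with no intermediate strength.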
4.3 Putnam's model
This model was proposed by Putnam based on the manpower distribution and the examination of many software projects (Kemerer, 2008). The main equation of Putnam's model is:
S = E × (Effort)^{1/3} × t_{d}^{4/3} ……… (a*)
where E is the environment indicator and demonstrates the environment's capability, t_{d} is the time of delivery, and Effort and S are expressed in person-years and lines of code respectively. Putnam presented another formula for Effort as follows:
Effort = D_{0} × t_{d}^{3} ……… (b*)
where D_{0}, the manpower build-up factor, varies from 8 (new software) to 27 (rebuilt software). By combining equations (a*) and (b*), the final equation is obtained as:
Effort = D_{0}^{4/7} × (S/E)^{9/7}
SLIM (Software Life Cycle Management) is a tool that acts according to Putnam's model.
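Assuming the standard combined form of Putnam's equations, Effort = D0^{4/7} × (S/E)^{9/7}, the model can be sketched as follows; the parameter values are hypothetical:

```python
def putnam_effort(size, E, D0):
    """Effort in person-years from the combined Putnam equation
    Effort = D0**(4/7) * (size / E)**(9/7), assuming the standard form:
    size in LOC, E the environment (technology) factor, and D0 the
    manpower build-up factor (roughly 8 for new to 27 for rebuilt software)."""
    return D0 ** (4 / 7) * (size / E) ** (9 / 7)

# Hypothetical project: 100,000 LOC, environment factor 5000, D0 midway
# between the "new" and "rebuilt" extremes.
effort = putnam_effort(100_000, 5000, 15)
```

Note the strong nonlinearity: the 9/7 exponent on size means doubling the code size more than doubles the effort, and a higher build-up factor D0 (compressing the schedule) also raises total effort.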
4.4 Model calibration using linear regression
A direct application of the above models does not take local circumstances into consideration. However, one can adjust the cost factors using local data and the linear regression method. We illustrate this model calibration using the general power function model:
Effort = a × S^{b}
Taking the logarithm of both sides and letting Y = log(Effort), A = log(a) and X = log(S), the formula is transformed into a linear equation:
Y = A + b × X
Applying the standard least-squares method to a set of previous project data {(Y_{i}, X_{i}): i = 1, …, k}, we obtain the required parameters b and A (and thus a) for the power function.
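The calibration above can be sketched in a few lines. The project data here are synthetic, generated from Effort = 3 × S^{1.1}, so the log-space fit should recover a ≈ 3 and b ≈ 1.1 (the function name is illustrative):

```python
import math

def calibrate_power_model(sizes, efforts):
    """Fit Effort = a * S**b by least squares in log space:
    Y = A + b*X with Y = log(Effort), X = log(S), A = log(a)."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(e) for e in efforts]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    A = mean_y - b * mean_x
    return math.exp(A), b  # recover a = exp(A)

# Synthetic "local" project data following an exact power law.
sizes = [5.0, 10.0, 20.0, 40.0]
a, b = calibrate_power_model(sizes, [3 * s ** 1.1 for s in sizes])
```

Real project data will of course scatter around the fitted line; the point of local calibration is that a and b then reflect the organization's own productivity rather than the model's published defaults.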
4.5 Discrete models
Discrete models have a tabular form, which usually relates effort, duration, difficulty and other cost factors. This class contains the Aron model, the Wolverton model, and the Boeing model. These models gained some popularity in the early days of cost estimation, as they were easy to use.
5. Conclusion
Today, almost no model can estimate the cost of software with a high degree of accuracy. This state of the practice has arisen because:
(1) there are a large number of interrelated factors that influence the software development process of a given development team, and a large number of project attributes, such as the number of user screens, the volatility of system requirements, and the use of reusable software components;
(2) the development environment is evolving continuously; and
(3) there is a lack of measures that truly reflect the complexity of a software system.
To produce a better estimate, we must improve our understanding of these project attributes and their causal relationships, model the impact of the evolving environment, and develop effective ways of measuring software complexity.
Historical data may prove to be irrelevant for the future projects. The search for reliable, accurate and low cost estimation methods must continue. Several areas are in need of immediate attention. For example, we need models for development based on formal methods, or iterative software process. Also, more studies are needed to improve the accuracy of cost estimate for maintenance projects.
References
[1] Hareton Leung and Zhang Fan, "Software Cost Estimation", The Hong Kong Polytechnic University.
[2] ISBSG, International software benchmarking standards group, http://www.isbsg.org/au..
[3] N. E. Fenton and S. L. Pfleeger, Software Metrics: A Rigorous and Practical Approach, PWS Publishing Company, 1997
[4] M. Shepperd and C. Schofield, “Estimating software project effort using analogy”, IEEE Trans. Soft. Eng. SE-23:12, 1997, pp. 736-743
[5] B. W. Boehm, Software engineering economics, Englewood Cliffs, NJ: Prentice-Hall, 1981.
[6] G. N. Parkinson, Parkinson's Law and Other Studies in Administration, Houghton Mifflin, Boston, 1957.
[7] J. R. Herd, J. N. Postak, W. E. Russell and K. R. Steward, "Software Cost Estimation Study Results", Final Technical Report, RADC-TR-77-220, Vol. I, Doty Associates, Inc., Rockville, MD, 1977.
[8] R. Nelson, Management HandBook for the Estimation of Computer Programming Costs, ADA648750, Systems Development Corp., 1966.
[9] C. E. Walston and C. P. Felix, “A method of programming measurement and estimation”, IBM Systems Journal, vol. 16, no. 1, 1977, pp. 54-73.
Cost Estimation Techniques in Computing:A Review
Deepanker Choudhary*
B.tech. Student, Department of Computer Sciences,
Manav Bharti University, Solan, H.P.
E-mail: deepanker06@yahoo.com
Vinod Kumar Bais
Assistant Professor, Department of Mathematics,
Manav Bharti University, Solan, H.P
E-mail: vinod.k4bais@gmail.com
Abstract: This Cost estimation of software projects or models with high accuracy at the conceptual bases programming is planning and obtained feasible solution. However, a number of difficulties arise when conducting cost estimation during the conceptual phase. Major problems faced are lack of preliminary information, lack of database of regarding works costs, data missingness, and lack of an appropriate cost estimation method. This paper focuses on the focus on a more accurate estimation technique for Software projects or models in India at the conceptual phase using various cost estimation methods.
Keywords: Cost estimation, Computer software, Cost model,PERT etc.
1.Introduction
Several indicators should be considered to estimate the software cost and effort. One of the most important indicators which should be noticed is the size of the project. The estimation of effort and cost depends on the accurate prediction of the size. Generally, the effort and cost estimations are difficult in the software projects. The reason is that software projects are often not unique and there is no background or previous experience about them. Therefore, prediction seems complicated.
At the initial stage of a project, there is high uncertainty about these project attributes. The estimate produced at this stage is inevitably inaccurate, as the accuracy depends highly on the
amount of reliable information available to the estimator. As we learn more about the project during analysis and later design stages, the uncertainties are reduced and more accurate estimates can be made. Most models produce exact results without regard to this uncertainty. They need to be enhanced to produce a range of estimates and their probabilities.
To improve the algorithmic models, there is a great need for the industry to collect project data on a wider scale. The recent effort of ISBSG is a step in the right direction [2].
With new types of applications, new development paradigms and new development tools, cost estimators are facing great challenges in applying known estimation models in the new
millenium. Historical data may prove to be irrelevant for the future projects. The search for reliable, accurate and low cost estimation methods must continue. Several areas are in need of immediate attention. For example, we need models for development based on formal methods, or iterative software process. Also, more studies are needed to improve the accuracy of cost estimate for maintenance projects. [1]
- METHOD OF ESTIMATION
Cost estimation is an important part of the any project. Costs are assumed to fall under project or modeling costs, if the study involves project or modeling performance assessment:
i. Designer cost,
ii. Programmer cost,
iii. Labor cost,
iv. Laboratory cost,
v. Participant cost,
vi. Subject matter expert cost,
vii. User needs assessment cost,
viii. Concept studies cost,
ix. Prototype and usability assessment costs,
x. Modeling and fast-time simulations cost,
xi. Real-time human-in-the-loop simulation cost,
xii. Experiment/study plan development cost,
xiii. Scenario development cost,
xiv. Scenario shakedown cost,
xv. Final simulation cost,
xvi. Data collection cost,
xvii. Data analysis cost, and
xviii. Final report development cost.
Generally, any time project’s data is collected or estimated in a study, experiment, or test, all of the costs affiliated with that data should be considered as a part of project cost.
For example, in the top-down planning approach, the cost estimate is used to derive the project plan:
1. The project manager develops a characterization of the overall functionality, size, process, environment, people, and quality required for the project.
2. A macro-level estimate of the total effort and schedule is developed using a software cost estimation model.
3. The project manager partitions the effort estimate into a top-level work breakdown structure. He also partitions the schedule into major milestone dates and determines a staffing profile, which together forms a project plan.
The actual cost estimation process involves seven steps:
1. Establish cost-estimating objectives
2. Generate a project plan for required data and resources
3. Pin down software requirements
4. Work out as much detail about the software system as feasible
5. Use several independent cost estimation techniques to capitalize on their combined strengths
6. Compare different estimates and iterate the estimation process
7. After the project has started, monitor its actual cost and progress, and feedback results to project management
No matter which estimation model is selected, users must pay attention to the following to get best results:
- Coverage of the estimate (some models generate effort for the full life-cycle, while others do not include effort for the requirement stage)
b) Calibration and assumptions of the model
c) Sensitivity of the estimates to the different model parameters
d) Deviation of the estimate with respect to the actual cost
- Line of Code:
The exact LOC can only be obtained after the project has completed. Estimating the code size of a program before it is actually built is almost as hard as estimating the cost of the program.
A typical method for estimating the code size is to use experts' judgment together with a technique called PERT. It involves experts' judgment of three possible code-sizes:
S_{L}= the lowest possible size;
S_{H}= the highest possible size; and
S_{M}= the most likely size.
The estimate of the code-size S is computed as:
S=(S_{L}+S_{H}+4S_{M })/6
PERT can also be used for individual components to obtain an estimate of the software system by summing up the estimates of all the components.
3. Non-algorithmic Methods
3.1 Analogy costing:
This method requires one or more completed projects that are similar to the new project and derives the estimation through reasoning by analogy using the actual costs of previous projects. Estimation by analogy can be done either at the total project level or at subsystem level. The total project level has the advantage that all cost components of the system will be considered while the subsystem level has the advantage of providing a more detailed assessment of the similarities and differences between the new project and the completed projects. The strength of this method is that the estimate is based on actual project experience.
However, it is not clear to what extend the previous project is actually representative of the constraints, environment and functions to be performed by the new system. Positive results and a definition of project similarity in term of features were reported in [4].
3.2 Expert judgment:
This method involves consulting one or more experts. The experts provide estimates using their own methods and experience. Expert-consensus mechanisms such as Delphi technique or PERT will be used to resolve the inconsistencies in the estimates.
The Delphi technique works as follows:
1) The coordinator presents each expert with a specification and a form to record estimates.
2) Each expert fills in the form individually (without discussing with others) and is allowed to ask the coordinator questions.
3) The coordinator prepares a summary of all estimates from the experts (including mean or median) on a form requesting another iteration of the experts’ estimates and the rationale
for the estimates.
4) Repeat steps 2)-3) as many rounds as appropriate.
A modification of the Delphi technique proposed by Boehm and Fahquhar [5] seems to be more effective: Before the estimation, a group meeting involving the coordinator and experts is arranged to discuss the estimation issues. In step 3), the experts do not need to give any rationale for the estimates. Instead, after each round of estimation, the coordinator calls a meeting to have experts discussing those points where their estimates varied widely.
3.3 Parkinson:
Using Parkinson's principle “work expands to fill the available volume” [6], the cost is determined (not estimated) by the available resources rather than based on an objective assessment. If the software has to be delivered in 12 months and 5 people are available, the effort is estimated to be 60 person-months. Although it sometimes gives good estimation, this method is not recommended as it may provide very unrealistic estimates. Also, this method does not
promote good software engineering practice.
3.4 Price-to-win:
The software cost is estimated to be the best price to win the project. The estimation is based on the customer's budget instead of the software functionality. For example, if a reasonable estimation for a project costs 100 person-months but the customer can only afford 60
person-months, it is common that the estimator is asked to modify the estimation to fit 60 personmonths’ effort in order to win the project. This is again not a good practice since it is very likely to cause a bad delay of delivery or force the development team to work overtime.
3.5 Bottom-up:
In this approach, each component of the software system is separately estimated and the results aggregated to produce an estimate for the overall system. The requirement for this approach is that an initial design must be in place that indicates how the system is decomposed into different components.
3.6 Top-down: This approach is the opposite of the bottom-up method. An overall cost estimate for the system is derived from global properties, using either algorithmic or non-algorithmic methods. The total cost can then be split up among the various components. This approach is more suitable for cost estimation at the early stage.
4. Algorithmic methods
The algorithmic methods are based on mathematical models that produce cost estimate as a function of a number of variables, which are considered to be the major cost factors. Any algorithmic model has the form:
Effort = f(x_{1}, x_{2}… x_{n})
Where {x_{1}, x_{2}… x_{n}} denote the cost factors. The existing algorithmic methods differ in two aspects: the selection of cost factors, and the form of the function f. We will first discuss the cost factors used in these models, and then characterize the models according to the form of the functions and whether the models are analytical or empirical.
4.1 Linear models
Linear models have the form:
Effort = a_{0}+∑a_{i}x_{i}; (i=1 to n)
where the coefficients a_{1}, …, a_{n} are chosen to best fit the completed project data. The work of Nelson belongs to this type of models [8]. We agree with Boehm's comment that "there are too many nonlinear interactions in software development for a linear model to work well.
4.2 Multiplicative models
Multiplicative models have the form:
Effort =a_{0}П a_{i}^{xi} ;(i =1 to n)
Again the coefficients a_{1}, …, a_{n} are chosen to best fit the completed project data. Walston-Felix [9] used this type of model with each xi taking on only three possible values: -1, 0, +1. Doty model [8] also belongs to this class with each xi taking on only two possible values: 0, +1. These two models seem to be too restrictive on the cost factor values.
4.3 Putman’s model
This model has been proposed by Putman according to manpower distribution and the examination of many software projects (Kemerer,2008). The main equation for Putnam’s model is:
[IMG]file:///C:%5CUsers%5Cuser%5CAppData%5CLocal%5CTemp%5Cmsohtmlclip1%5C01%5Cclip_image002.gif[/IMG]……………a*
where, E is the environment indicator and demonstrates the environment ability. td is the time of delivery. Effort and S are expressed by person-year and line of code respectively. Putnam presented another formula for Effort as follows:
[IMG]file:///C:%5CUsers%5Cuser%5CAppData%5CLocal%5CTemp%5Cmsohtmlclip1%5C01%5Cclip_image004.gif[/IMG]…………………b*
where, D0 , the manpower build-up factor, varies from 8(new software) to 27(rebuilt software). By combining equations a* and b*, the final equation is obtained as:
[IMG]file:///C:%5CUsers%5Cuser%5CAppData%5CLocal%5CTemp%5Cmsohtmlclip1%5C01%5Cclip_image006.gif[/IMG]
SLIM (Software Life Cycle Management) is a tool that acts according to the Putnam’s model.
4.4 Model calibration using linear regression
A direct application of the above models does not take local circumstances into consideration.
However, one can adjust the cost factors using the local data and linear regression method. We illustrate this model calibration using the general power function model:
Effort = a*S^{b}.
Take logarithm of both sides and let Y = log (Effort),
A = log (a) and X = log(S). The formula is transformed into a linear equation:
Y = A + b*X
Applying the standard least square method to a set of previous project data {Yi, Xi: i =1… k}, we obtain the required parameters b and A (and thus a) for the power function.
4.5 Discrete models
Discrete models have a tabular form, which usually relates the effort, duration, difficulty and other cost factors. This class of models contains as Aron model , Wolverton model , and
Boeing model. These models gained some popularity in the early days of cost estimation, as they were easy to use.
5. Conclusion
Today, almost no model can estimate the cost of software with a high degree of accuracy. This state of the practice is created because
(1) there are a large number of interrelated factors that influence the software development process of a given development team and a large number of project attributes, such as number of user screens, volatility of system requirements and the use of reusable software
Components:
(2) The development environment is evolving continuously.
(3) The lack of measurement that truly reflects the complexity of a software system.
To produce a better estimate, we must improve our understanding of these project attributes and their causal relationships, model the impact of evolving environment, and develop effective ways of measuring software complexity.
Historical data may prove to be irrelevant for the future projects. The search for reliable, accurate and low cost estimation methods must continue. Several areas are in need of immediate attention. For example, we need models for development based on formal methods, or iterative software process. Also, more studies are needed to improve the accuracy of cost estimate for maintenance projects.
References
- Hareton Leung, Zhang Fan,” Software Cost Estimation”The Hong Kong Polytechnic
- ISBSG, International software benchmarking standards group, http://www.isbsg.org/au..
- N. E. Fenton and S. L. Pfleeger, Software Metrics: A Rigorous and Practical Approach, PWS Publishing Company, 1997
- M. Shepperd and C. Schofield, “Estimating software project effort using analogy”, IEEE Trans. Soft. Eng. SE-23:12, 1997, pp. 736-743
- B. W. Boehm, Software engineering economics, Englewood Cliffs, NJ: Prentice-Hall, 1981.
- G.N. Parkinson, Parkinson's Law and Other Studies in Administration, Houghton-Miffin,Boston, 1957
- J. R. Herd, J.N. Postak, W.E. Russell and K.R. Steward, "Software cost estimation Study results", Final Technical Report, RADC-TR77-220, Vol. I, Doty Associates, Inc.,Rockville, MD, 1977
- R. Nelson, Management HandBook for the Estimation of Computer Programming Costs, ADA648750, Systems Development Corp., 1966.
- C. E. Walston and C. P. Felix, “A method of programming measurement and estimation”, IBM Systems Journal, vol. 16, no. 1, 1977, pp. 54-73.
Cost Estimation Techniques in Computing:A Review
Deepanker Choudhary*
B.tech. Student, Department of Computer Sciences,
Manav Bharti University, Solan, H.P.
E-mail: deepanker06@yahoo.com
Vinod Kumar Bais
Assistant Professor, Department of Mathematics,
Manav Bharti University, Solan, H.P
E-mail: vinod.k4bais@gmail.com
Abstract: This Cost estimation of software projects or models with high accuracy at the conceptual bases programming is planning and obtained feasible solution. However, a number of difficulties arise when conducting cost estimation during the conceptual phase. Major problems faced are lack of preliminary information, lack of database of regarding works costs, data missingness, and lack of an appropriate cost estimation method. This paper focuses on the focus on a more accurate estimation technique for Software projects or models in India at the conceptual phase using various cost estimation methods.
Keywords: Cost estimation, Computer software, Cost model,PERT etc.
1. Introduction
Several indicators should be considered when estimating software cost and effort. One of the most important is the size of the project: the estimation of effort and cost depends on an accurate prediction of size. Effort and cost estimation is generally difficult for software projects. The reason is that software projects are often unique, with no background or previous experience to draw on. Therefore, prediction is complicated.
At the initial stage of a project, there is high uncertainty about the project attributes. The estimate produced at this stage is inevitably inaccurate, as the accuracy depends highly on the amount of reliable information available to the estimator. As we learn more about the project during analysis and later design stages, the uncertainties are reduced and more accurate estimates can be made. Most models produce exact results without regard to this uncertainty; they need to be enhanced to produce a range of estimates and their probabilities.
To improve the algorithmic models, there is a great need for the industry to collect project data on a wider scale. The recent effort of ISBSG is a step in the right direction [2].
With new types of applications, new development paradigms and new development tools, cost estimators are facing great challenges in applying known estimation models in the new millennium. Historical data may prove to be irrelevant for future projects. The search for reliable, accurate and low-cost estimation methods must continue. Several areas are in need of immediate attention. For example, we need models for development based on formal methods, or on an iterative software process. Also, more studies are needed to improve the accuracy of cost estimates for maintenance projects [1].
2. Methods of Estimation
Cost estimation is an important part of any project. The following costs are assumed to fall under project or modeling costs if the study involves project or modeling performance assessment:
i. Designer cost,
ii. Programmer cost,
iii. Labor cost,
iv. Laboratory cost,
v. Participant cost,
vi. Subject matter expert cost,
vii. User needs assessment cost,
viii. Concept studies cost,
ix. Prototype and usability assessment costs,
x. Modeling and fast-time simulations cost,
xi. Real-time human-in-the-loop simulation cost,
xii. Experiment/study plan development cost,
xiii. Scenario development cost,
xiv. Scenario shakedown cost,
xv. Final simulation cost,
xvi. Data collection cost,
xvii. Data analysis cost, and
xviii. Final report development cost.
Generally, any time a project's data is collected or estimated in a study, experiment, or test, all of the costs affiliated with that data should be considered part of the project cost.
For example, in the top-down planning approach, the cost estimate is used to derive the project plan:
1. The project manager develops a characterization of the overall functionality, size, process, environment, people, and quality required for the project.
2. A macro-level estimate of the total effort and schedule is developed using a software cost estimation model.
3. The project manager partitions the effort estimate into a top-level work breakdown structure. He also partitions the schedule into major milestone dates and determines a staffing profile, which together forms a project plan.
The actual cost estimation process involves seven steps:
1. Establish cost-estimating objectives
2. Generate a project plan for required data and resources
3. Pin down software requirements
4. Work out as much detail about the software system as feasible
5. Use several independent cost estimation techniques to capitalize on their combined strengths
6. Compare different estimates and iterate the estimation process
7. After the project has started, monitor its actual cost and progress, and feedback results to project management
No matter which estimation model is selected, users must pay attention to the following to get best results:
a) Coverage of the estimate (some models generate effort for the full life-cycle, while others do not include effort for the requirement stage)
b) Calibration and assumptions of the model
c) Sensitivity of the estimates to the different model parameters
d) Deviation of the estimate with respect to the actual cost
2.1 Lines of Code:
The exact LOC can only be obtained after the project has completed. Estimating the code size of a program before it is actually built is almost as hard as estimating the cost of the program.
A typical method for estimating the code size is to use experts' judgment together with a technique called PERT. It involves experts' judgment of three possible code-sizes:
S_L = the lowest possible size;
S_H = the highest possible size; and
S_M = the most likely size.
The estimate of the code size S is computed as:
S = (S_L + 4S_M + S_H) / 6
PERT can also be used for individual components to obtain an estimate of the software system by summing up the estimates of all the components.
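The PERT computation above, including the component-level summation, can be sketched in Python as follows (the component size triples are invented purely for illustration):

```python
def pert_size(s_low, s_likely, s_high):
    """Three-point (PERT) size estimate: the most likely size is
    weighted four times as heavily as the two extremes."""
    return (s_low + 4 * s_likely + s_high) / 6

# System-level estimate: sum the PERT estimates of the components.
# The (S_L, S_M, S_H) triples below (in LOC) are illustrative only.
components = [(1000, 1500, 2500),
              (400, 600, 900)]
total_size = sum(pert_size(lo, m, hi) for lo, m, hi in components)
```

Summing component estimates this way gives a system-level size figure while letting experts judge each component separately.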
3. Non-algorithmic Methods
3.1 Analogy costing:
This method requires one or more completed projects that are similar to the new project and derives the estimation through reasoning by analogy using the actual costs of previous projects. Estimation by analogy can be done either at the total project level or at subsystem level. The total project level has the advantage that all cost components of the system will be considered while the subsystem level has the advantage of providing a more detailed assessment of the similarities and differences between the new project and the completed projects. The strength of this method is that the estimate is based on actual project experience.
However, it is not clear to what extent the previous project is actually representative of the constraints, environment and functions to be performed by the new system. Positive results and a definition of project similarity in terms of features were reported in [4].
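A minimal sketch of analogy-based estimation, assuming each project is described by a numeric feature vector (the features and effort figures below are invented; real analogy tools also normalize features and may average several neighbours):

```python
import math

def analogy_estimate(new_features, completed_projects):
    """Estimate effort for a new project as the actual effort of the
    most similar completed project (Euclidean distance over features)."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = min(completed_projects,
                  key=lambda p: distance(new_features, p["features"]))
    return nearest["effort"]

# Hypothetical feature vectors: (size in KLOC, team size, #interfaces).
history = [
    {"features": (10, 4, 3), "effort": 24},    # person-months
    {"features": (50, 12, 8), "effort": 140},
]
estimate = analogy_estimate((12, 5, 3), history)  # closest to project 1
```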
3.2 Expert judgment:
This method involves consulting one or more experts. The experts provide estimates using their own methods and experience. Expert-consensus mechanisms such as Delphi technique or PERT will be used to resolve the inconsistencies in the estimates.
The Delphi technique works as follows:
1) The coordinator presents each expert with a specification and a form to record estimates.
2) Each expert fills in the form individually (without discussing with others) and is allowed to ask the coordinator questions.
3) The coordinator prepares a summary of all estimates from the experts (including mean or median) on a form requesting another iteration of the experts’ estimates and the rationale
for the estimates.
4) Repeat steps 2)-3) for as many rounds as appropriate.
A modification of the Delphi technique proposed by Boehm and Farquhar [5] seems to be more effective: before the estimation, a group meeting involving the coordinator and experts is arranged to discuss the estimation issues. In step 3), the experts do not need to give any rationale for their estimates. Instead, after each round of estimation, the coordinator calls a meeting to have the experts discuss the points where their estimates varied widely.
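The summary the coordinator prepares in step 3) can be sketched as below (the round of expert estimates is invented; a real Delphi form would also collect the rationales):

```python
import statistics

def delphi_summary(estimates):
    """Summary fed back to the experts after each Delphi round:
    central tendency (mean, median) plus the spread, which signals
    how far the experts still disagree."""
    return {
        "mean": statistics.mean(estimates),
        "median": statistics.median(estimates),
        "spread": max(estimates) - min(estimates),
    }

round1 = [30, 45, 60, 38]          # person-months, illustrative
summary = delphi_summary(round1)   # discussed before the next round
```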
3.3 Parkinson:
Using Parkinson's principle that "work expands to fill the available volume" [6], the cost is determined (not estimated) by the available resources rather than by an objective assessment. If the software has to be delivered in 12 months and 5 people are available, the effort is estimated to be 60 person-months. Although it sometimes gives a good estimate, this method is not recommended, as it may produce very unrealistic estimates and does not promote good software engineering practice.
3.4 Price-to-win:
The software cost is estimated to be the best price to win the project. The estimation is based on the customer's budget instead of the software functionality. For example, if a reasonable estimate for a project is 100 person-months but the customer can only afford 60 person-months, the estimator is commonly asked to modify the estimate to fit 60 person-months of effort in order to win the project. This is again not good practice, since it is very likely to cause a serious delay of delivery or force the development team to work overtime.
3.5 Bottom-up:
In this approach, each component of the software system is separately estimated and the results aggregated to produce an estimate for the overall system. The requirement for this approach is that an initial design must be in place that indicates how the system is decomposed into different components.
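A minimal sketch of bottom-up aggregation, with hypothetical component names and effort figures taken from an assumed initial design:

```python
# Hypothetical per-component effort estimates (person-months),
# available only once an initial decomposition of the system exists.
component_effort = {
    "ui": 6,
    "database": 4,
    "business_logic": 10,
    "integration_testing": 3,
}

# Bottom-up: the system estimate is the aggregate of the parts.
system_effort = sum(component_effort.values())
```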
3.6 Top-down: This approach is the opposite of the bottom-up method. An overall cost estimate for the system is derived from global properties, using either algorithmic or non-algorithmic methods. The total cost can then be split up among the various components. This approach is more suitable for cost estimation at the early stage.
4. Algorithmic Methods
The algorithmic methods are based on mathematical models that produce cost estimate as a function of a number of variables, which are considered to be the major cost factors. Any algorithmic model has the form:
Effort = f(x_1, x_2, …, x_n)
where {x_1, x_2, …, x_n} denote the cost factors. The existing algorithmic methods differ in two aspects: the selection of cost factors, and the form of the function f. We will first discuss the cost factors used in these models, and then characterize the models according to the form of the functions and whether the models are analytical or empirical.
4.1 Linear models
Linear models have the form:
Effort = a_0 + Σ a_i·x_i    (i = 1 to n)
where the coefficients a_0, a_1, …, a_n are chosen to best fit the completed project data. The work of Nelson belongs to this type of model [8]. We agree with Boehm's comment that "there are too many nonlinear interactions in software development for a linear model to work well".
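Fitting such a linear model to completed-project data is an ordinary least-squares problem. The sketch below uses invented data constructed to lie exactly on the plane Effort = 2 + 3·x_1 + 1·x_2, so the recovered coefficients can be checked:

```python
import numpy as np

# Illustrative completed-project data: each row is a cost-factor
# vector (x1, x2); y holds the observed effort per project.
X = np.array([[10.0, 3.0],
              [20.0, 5.0],
              [15.0, 4.0],
              [30.0, 8.0]])
y = np.array([35.0, 67.0, 51.0, 100.0])

# Prepend a column of ones so the intercept a0 is fitted too:
# Effort = a0 + a1*x1 + a2*x2
A = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict effort for a hypothetical new project with factors (25, 6).
predicted = coeffs @ [1.0, 25.0, 6.0]
```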
4.2 Multiplicative models
Multiplicative models have the form:
Effort = a_0 · Π a_i^{x_i}    (i = 1 to n)
Again, the coefficients a_0, a_1, …, a_n are chosen to best fit the completed project data. Walston-Felix [9] used this type of model, with each x_i taking only three possible values: -1, 0, +1. The Doty model [8] also belongs to this class, with each x_i taking only two possible values: 0, +1. These two models seem too restrictive on the cost-factor values.
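Evaluating a multiplicative model is a simple product; the sketch below uses invented coefficients with the Walston-Felix restriction of each x_i to {-1, 0, +1}:

```python
def multiplicative_effort(a0, factor_weights, x):
    """Effort = a0 * Π a_i**x_i, with each x_i restricted to
    {-1, 0, +1} as in the Walston-Felix model."""
    effort = a0
    for a_i, x_i in zip(factor_weights, x):
        assert x_i in (-1, 0, 1), "cost-factor value out of range"
        effort *= a_i ** x_i
    return effort

# Illustrative numbers: x_i = +1 inflates effort by that factor,
# -1 deflates it, and 0 leaves it unchanged.
e = multiplicative_effort(10.0, [1.5, 2.0, 1.2], [1, -1, 0])
```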
4.3 Putnam's model
This model was proposed by Putnam based on manpower distribution and the examination of many software projects (Kemerer, 2008). The main equation of Putnam's model is:
S = E · (Effort)^{1/3} · t_d^{4/3}    (a*)
where E is the environment indicator and demonstrates the capability of the environment, and t_d is the time of delivery. Effort and S are expressed in person-years and lines of code respectively. Putnam presented another formula for Effort as follows:
D_0 = Effort / t_d^{3}    (b*)
where D_0, the manpower build-up factor, varies from 8 (new software) to 27 (rebuilt software). By combining equations (a*) and (b*), the final equation is obtained as:
Effort = D_0^{4/7} · (S/E)^{9/7}
SLIM (Software Life Cycle Management) is a tool that works according to Putnam's model.
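The combined equation can be evaluated directly once S, E and D_0 are known; the sketch below uses invented parameter values, not calibrated ones:

```python
def putnam_effort(size_loc, env_factor, d0):
    """Combined Putnam equation: Effort = D0**(4/7) * (S/E)**(9/7),
    with Effort in person-years, S in lines of code."""
    return d0 ** (4 / 7) * (size_loc / env_factor) ** (9 / 7)

# Illustrative (assumed) inputs: 100,000 LOC, environment factor 5000,
# manpower build-up factor 15.
effort = putnam_effort(100_000, 5000, 15)
```

Note how effort grows faster than linearly in size (exponent 9/7), which matches the model's diseconomy-of-scale assumption.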
4.4 Model calibration using linear regression
A direct application of the above models does not take local circumstances into consideration.
However, one can adjust the cost factors using the local data and linear regression method. We illustrate this model calibration using the general power function model:
Effort = a·S^b
Taking the logarithm of both sides and letting Y = log(Effort), A = log(a) and X = log(S), the formula is transformed into the linear equation:
Y = A + b·X
Applying the standard least-squares method to a set of previous project data {(Y_i, X_i) : i = 1, …, k}, we obtain the required parameters b and A (and thus a) for the power function.
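This calibration can be sketched with the textbook least-squares slope/intercept formulas. The project data below is synthetic, generated from a known power law (Effort = 3·S^1.2) purely so the fit can be checked:

```python
import math

# Synthetic "local" project data: (size S in KLOC, actual effort),
# generated from Effort = 3 * S**1.2 for illustration only.
projects = [(s, 3 * s ** 1.2) for s in (10, 20, 40, 80)]

# Fit Effort = a * S**b by least squares in log space: Y = A + b*X.
xs = [math.log(s) for s, _ in projects]
ys = [math.log(e) for _, e in projects]
n = len(projects)
x_bar = sum(xs) / n
y_bar = sum(ys) / n
b = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
    sum((x - x_bar) ** 2 for x in xs)
A = y_bar - b * x_bar
a = math.exp(A)   # calibrated parameters for Effort = a * S**b
```

With real (noisy) local data the recovered a and b would of course only approximate the underlying relationship.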
4.5 Discrete models
Discrete models have a tabular form, which usually relates effort, duration, difficulty and other cost factors. This class includes the Aron model, the Wolverton model, and the Boeing model. These models gained some popularity in the early days of cost estimation, as they were easy to use.
5. Conclusion
Today, almost no model can estimate the cost of software with a high degree of accuracy. This state of the practice exists because:
(1) a large number of interrelated factors influence the software development process of a given development team, along with a large number of project attributes, such as the number of user screens, the volatility of system requirements and the use of reusable software components;
(2) the development environment is evolving continuously; and
(3) there is a lack of measurement that truly reflects the complexity of a software system.
To produce a better estimate, we must improve our understanding of these project attributes and their causal relationships, model the impact of evolving environment, and develop effective ways of measuring software complexity.
References
[1] H. Leung and Z. Fan, "Software Cost Estimation", The Hong Kong Polytechnic University.
[2] ISBSG, International Software Benchmarking Standards Group, http://www.isbsg.org/au..
[3] N. E. Fenton and S. L. Pfleeger, Software Metrics: A Rigorous and Practical Approach, PWS Publishing Company, 1997.
[4] M. Shepperd and C. Schofield, "Estimating software project effort using analogy", IEEE Trans. Soft. Eng., SE-23:12, 1997, pp. 736-743.
[5] B. W. Boehm, Software Engineering Economics, Englewood Cliffs, NJ: Prentice-Hall, 1981.
[6] G. N. Parkinson, Parkinson's Law and Other Studies in Administration, Houghton Mifflin, Boston, 1957.
[7] J. R. Herd, J. N. Postak, W. E. Russell and K. R. Steward, "Software cost estimation study results", Final Technical Report, RADC-TR77-220, Vol. I, Doty Associates, Inc., Rockville, MD, 1977.
[8] R. Nelson, Management Handbook for the Estimation of Computer Programming Costs, ADA648750, Systems Development Corp., 1966.
[9] C. E. Walston and C. P. Felix, "A method of programming measurement and estimation", IBM Systems Journal, vol. 16, no. 1, 1977, pp. 54-73.