[Image: Scales weighing price against value in the acquisition process]




Some Federal acquisitions[1] are turning to multi-objective criteria to evaluate competing bids.  The goal is to be objective about “best value” and avoid successful bid protests.[2]

Bidders responding to this approach face a complex hyperspace of potential offerings.  There are many ways to offer a solution with the same value, so value-price comparisons and tradeoffs can seem intractable.  A further complication is the government’s incomplete disclosure of value weightings among requirements and Key Performance Parameters (KPPs).  Hyperdimensionality and incomplete mathematical definition of the trade-space render deterministic methods useless.[3]

To overcome these challenges and rapidly evaluate a bidder’s alternatives for offering “best value,” a new approach was needed.  The Stochastic Multi-Objective Decision Analysis (SMODA) methods described in this paper can quickly evaluate the value-price trade-space.  The SMODA approach allows a bidder to offer what the government seeks: best value for the best price.

The SMODA evaluation assesses overall system utility.  System utility is formed from lower-level utility curves associated with selected requirements.  Subject matter experts provide insights to construct these utility curves.  Many requirements (more than one hundred) can be included, with emphasis placed on the KPPs.  Government threshold and objective criteria are included, with a span of uncertainty, for the most important (heavily weighted) requirements and/or KPPs.
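To make the idea of an elicited utility curve concrete, the sketch below builds one from SME-supplied (performance, utility) points, assuming simple piecewise-linear interpolation.  The “range” requirement, its breakpoints, and its utility values are illustrative assumptions, not values from any actual model.

```python
# Hypothetical sketch: a lower-level utility curve built from SME-elicited
# (performance, utility) points, assuming piecewise-linear interpolation.
# The "range" requirement and its breakpoints are invented for illustration.

def make_utility_curve(points):
    """Return a function that interpolates between (performance, utility) pairs,
    clamping to the end values outside the elicited range."""
    pts = sorted(points)
    def utility(x):
        if x <= pts[0][0]:
            return pts[0][1]
        if x >= pts[-1][0]:
            return pts[-1][1]
        for (x0, u0), (x1, u1) in zip(pts, pts[1:]):
            if x0 <= x <= x1:
                return u0 + (u1 - u0) * (x - x0) / (x1 - x0)
    return utility

# Notional "range" requirement: threshold at 100 km (utility 0.5),
# objective at 200 km (utility 1.0), flat beyond the objective.
range_utility = make_utility_curve([(50.0, 0.0), (100.0, 0.5),
                                    (200.0, 1.0), (300.0, 1.0)])
```

Flattening the curve beyond the objective reflects a common elicitation outcome: performance past the objective earns no additional utility.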

This approach estimates “best value” even when RFP instructions don’t fully disclose weighting criteria.

This paper discusses the architecture of a SMODA model. It explains how requirements and KPPs are categorized and weighted.  A trade-space of system utility (value) versus cost is generated from the model.


Assume the government issued a request for proposals (RFP) for a GPS locator.  This product has several requirements, including size, mass, range, availability, and reliability.  Furthermore, assume two requirements, mass and availability, are identified as KPPs.

The RFP seems to suggest a MODA method will be used to evaluate the proposals,[4] but does not disclose the MODA weighting criteria, and in some cases the rationale for establishing “thresholds” and “objectives” is not disclosed either.[5]

The first step in creating a SMODA model is to group the various requirements according to type.  In practice, categories naturally occur based on the system (and subsystems) being built.  Size, mass, and other physical requirements are categorized together, while availability, reliability, and other support requirements are categorized together.  Another grouping, performance, would include accuracy, mission operating time, etc.

After categorizing the requirements, the next step is to quantify whether a particular requirement has been met or not.  For these “binary” requirements, a non-zero or zero score is assigned accordingly.  For requirements measured by “threshold” and/or “objective,” scores are assigned as to whether a particular requirement has met or exceeded the threshold, or met or exceeded the objective.

Of course, if the requirement does not even meet the threshold, it is assigned a score of zero.  The requirements within a category are then summed to obtain a category subtotal.  Furthermore, each category is weighted according to its importance to obtain a weighted category subtotal.  Finally, all the categories are summed to obtain an overall system utility score.[6]
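The scoring and roll-up just described can be sketched in a few lines.  The categories, requirement scores, fractional contributions, and category weights below are hypothetical placeholders standing in for SME-elicited values; the structure (score × fraction, summed per category, weighted, then summed overall) follows the steps above.

```python
# Illustrative roll-up of requirement scores to an overall system utility.
# All numbers here are assumed placeholders, not government or SME values.
# Each requirement is (name, score, fraction-of-category-subtotal).
# Binary requirements score 0 or 1; threshold/objective requirements may
# exceed 1 when the objective is met (a bonus factor).
categories = {
    "PHYSICAL":    {"weight": 0.40,
                    "requirements": [("size", 1.0, 0.30),          # binary: met
                                     ("KPP1 mass", 1.2, 0.70)]},   # objective met
    "SUPPORT":     {"weight": 0.35,
                    "requirements": [("availability", 1.0, 0.60),
                                     ("reliability", 0.0, 0.40)]}, # threshold missed
    "PERFORMANCE": {"weight": 0.25,
                    "requirements": [("accuracy", 1.0, 1.00)]},
}

def system_utility(categories, scale=100.0):
    """Sum fraction-weighted scores within each category, apply the category
    weight, and scale so that full nominal compliance scores 100."""
    # Category weights must sum to unity, as described later in the paper.
    assert abs(sum(c["weight"] for c in categories.values()) - 1.0) < 1e-9
    total = 0.0
    for cat in categories.values():
        subtotal = sum(score * frac for _, score, frac in cat["requirements"])
        total += cat["weight"] * subtotal
    return scale * total

print(round(system_utility(categories), 1))  # → 91.6
```

In this toy example, the missed reliability threshold pulls the score below 100, partially offset by the mass KPP bonus, which previews the offset strategy discussed later.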


A notional MODA influence architecture is shown in the figure below.  The figure represents (1) the construct the proposal evaluators are likely to use in their MODA determination and (2) the SMODA model architecture.

Notice how the requirement “Size” (gray rectangle) in the PHYSICAL group has no factor applied to it (no yellow oval).  It is simply an addend for the “Subtotal, PHYSICAL, unweighted.”  This is because “Size” is “binary” – either the requirement is met or it isn’t.  If it is, it is assigned a utility, which is a fraction of the unweighted subtotal for its group.

Compare this with the requirement “KPP1 (mass)” (gray rectangle) in the PHYSICAL group, which does have a factor, “KPP1 factor” (yellow oval), applied to it before being summed into “Subtotal, PHYSICAL, unweighted.”  This is because “KPP1 (mass)” is a threshold/objective requirement and is consequently scored differently.

Obviously, if the threshold is not met, the score is zero.  If the threshold is met, but not the objective, a score of 1 is assigned.  If the objective is met, a bonus, or a factor greater than 1, is assigned.  The score of each threshold/objective requirement must have a factor applied so that its contribution is a fractional part of the “Subtotal, PHYSICAL, unweighted.”
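The three-way scoring rule above can be written directly.  The mass threshold, objective, and bonus factor in this sketch are assumed values for illustration; in practice they come from the RFP and SME elicitation.

```python
# Minimal sketch of threshold/objective scoring for a notional mass KPP
# (threshold 5.0 kg, objective 3.0 kg, lighter is better).  The bonus
# factor of 1.2 is an assumed, SME-elicited value, not a fixed rule.

def threshold_objective_score(mass_kg, threshold=5.0, objective=3.0, bonus=1.2):
    """Return 0 if the threshold is missed, 1 if only the threshold is met,
    and a bonus factor greater than 1 if the objective is met."""
    if mass_kg > threshold:
        return 0.0   # threshold not met
    if mass_kg > objective:
        return 1.0   # threshold met, objective not met
    return bonus     # objective met: bonus factor greater than 1
```

A factor (the “KPP1 factor” oval in the figure) then scales this score into its fractional share of the category subtotal.  Note the score is flat below the objective: shaving further mass earns no additional credit, which is the diminishing-returns effect discussed later.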

All fractions for the binary requirements and all factors for the threshold requirements must sum to unity.  Similarly, the weights on each group must also sum to unity (or 100%).  Numerical values for binary utilities and their fractional contributions, threshold/objective utilities and their factors (and possible bonuses), as well as category weights, are all obtained through subject matter expert elicitation.


Because requirement inputs represent a range[7] of uncertainty, the bidder can conduct an analysis of alternatives trade study to explore the impact of each requirement without knowing exactly what MODA weighting the government evaluators will use.

The model also accommodates uncertainty in how evaluators will judge performance.  The stochastic approach provides rich insight into the impact of system features on the “Total SYSTEM” score.
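A minimal Monte Carlo sketch of this stochastic element follows: since the government’s category weights are undisclosed, it treats them as random variables and examines the resulting distribution of total scores.  The uniform distributions, weight ranges, and category subtotals are all assumed for illustration; a real model would use SME-elicited distributions for many more inputs.

```python
# Monte Carlo sketch: sample uncertain, undisclosed category weights and
# observe the spread of the total system score.  All distributions and
# numbers here are illustrative assumptions, not elicited model values.
import random

def sample_weights(rng):
    """Draw uncertain category weights and renormalize them to sum to 1."""
    raw = {"PHYSICAL": rng.uniform(0.3, 0.5),
           "SUPPORT": rng.uniform(0.2, 0.4),
           "PERFORMANCE": rng.uniform(0.2, 0.4)}
    total = sum(raw.values())
    return {k: v / total for k, v in raw.items()}

def monte_carlo_scores(subtotals, n=10_000, seed=1):
    """Score distribution for fixed category subtotals under weight uncertainty."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n):
        w = sample_weights(rng)
        scores.append(100.0 * sum(w[k] * subtotals[k] for k in w))
    return scores

# Notional unweighted category subtotals for one candidate offering.
subtotals = {"PHYSICAL": 1.14, "SUPPORT": 0.60, "PERFORMANCE": 1.00}
scores = monte_carlo_scores(subtotals)
print(f"min {min(scores):.1f}, mean {sum(scores)/len(scores):.1f}, "
      f"max {max(scores):.1f}")
```

The width of the resulting score distribution tells the bidder how sensitive an offering’s “Total SYSTEM” score is to the evaluators’ undisclosed weightings.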

It also provides insight into the tolerance for uncertainty.  SMODA model evaluation shows that a feature must be judged by the evaluators to be above some threshold (even if that threshold has not been disclosed).  This gives the bidder insight into what evidence to emphasize in a page-limited proposal.

Upon evaluating the model, an offer which complies with requirements would nominally score 100.  Because bonuses can be awarded for meeting “objectives,” the score might exceed 100.  Conversely, if some requirements are not met, the score will be less than 100.

Areas where sub-scores are below nominal expectations can be offset by some “greater than threshold” scores.  For a bidder, this might be a strategy to offer “best value” if their approach can inexpensively increase the MODA score.

However, at some point additional performance has little or no value.

The SMODA approach helps to highlight these areas of diminishing return, where even one dollar spent in pursuit of being “better” will not result in a “better” MODA score by proposal evaluators.

This example illustrates two competencies required for SMODA evaluation: processes and tools.  Construction and evaluation of the SMODA model requires disciplined processes for eliciting the trade-space features and weighting.  There is a considerable temptation for a bidder to project biases into what appears to be an objective analysis.  The SMODA approach also requires a simulation environment[8] with the capacity to rapidly run large Monte Carlo sets, and to quickly consider alternatives.

Bidders participating in bid opportunities using MODA can also gain significant insight by using the same model to evaluate their expected competitor’s offerings.

This short paper cannot illustrate all the ways a SMODA model can be used to evaluate potential competitors’ offerings and assist the bidder in strategizing how best to position to win the “Best Value” competition.  It should be clear that a bidder using the SMODA method has a significant competitive advantage over competitors who fail to use it.


MODA methods are likely to continue to see increased use.  As competitive procurements adopt MODA methods, successful bidders will adapt to provide their customers with “best value” as defined by the customer.   However, the government is unlikely to fully disclose MODA weights or all explicit criteria.

To explore the best value a bidder can offer, uncertainty must be explicitly considered.  The SMODA method described in this paper, and pioneered by the authors, is the most powerful means devised for this purpose.

SMODA can create significant competitive advantage.  It offers asymmetric insight and understanding over competitors who fail to employ the SMODA method.


Keeney, R., and Raiffa, H., Decisions with Multiple Objectives: Preferences and Value Trade-Offs, 1993

DTIC provides several papers by the USAF and others on MODA for evaluation of capability and bids, including some foreign uses of MODA.


GAO Bid Protests: Trends and Analysis


Lone Star’s Multi-Criteria Modeling Showcased at INFORMS


Black Swans: Frequent Once-in-a-Lifetime Crises


Stochastic Optimization in a Probabilistic Simulation Environment


[1] USAF procurements have pioneered this approach, and other agencies in the United States and abroad are adopting this method as well. For more information see the reference cited in the end-notes.


[2] Lowest Price, Technically Acceptable (LPTA), another method to avoid protests, resulted in stripped down, inflexible acquisitions.  Multi-Objective evaluation criteria are used to promote innovation, allow diverse bidders, and seek “best value” while at the same time, avoiding protests based on claims the government was arbitrary in determining what constituted “best value.”


[3] A government evaluation team will use deterministic Multi-Objective Decision Analysis (MODA).  But failure to disclose complete criteria, and uncertainty in how evaluators judge the criteria, makes MODA highly uncertain, and NOT deterministic from a bidder’s point of view. For more information on MODA, see references in the end-notes.

[4] In most federal RFPs, this will be used for the “technical” evaluation, which means all criteria other than pricing.  The MODA score can be completed by one team which need not know a bidder’s price, while a separate government team evaluates costing and pricing.  The SMODA model can be limited to the “technical” value, or can include cost/price to represent what both evaluation teams will do.


[5] There are many reasons why RFPs are issued with such incomplete information.  A discussion of this topic is beyond the scope of this paper, but incomplete disclosure is a structural feature of government RFPs, and won’t change any time soon.


[6] Keep in mind, any of these “binary” or “threshold/objective” requirements could also take the role of a KPP with factors and weights assigned as explained below.


[7] They are “random variables” which the model represents with stochastic methods, hence the “S” in SMODA.

[8] The example in this paper uses TruNavigator® the modeling environment most commonly used by the authors.


BY: Randal Allen, Dave Lundquist, Ricardo Lopez, and Steve Roemerman

Lone Star Analysis

Addison, TX