Monday, April 27, 2009
Two-fer
Coming full circle. Brill's paper offers an analysis of the role of modeling in public sector decision making. In much the same vein as Liebman's "Wicked Problem" paper from early in the series, Brill examines the role of optimization and simulation models in the decision making process, surveying the (then current) literature for examples of optimization models being used as part of the decision support structure for public projects. Based on these examples he further develops the idea that optimization modeling has moved from being "the way" to get "the answer" to being just another tool in the decision making kit. He even refers to Liebman's analysis of "wicked" public problems to support his position on the role for modeling.
Overall this paper made a nice bookend for the semester literature series. Although I don't really think that Brill moved beyond the ideas put forward by Liebman, his examples helped to further round out the concept. Between the two papers, the evolution of modeling, both simulation and optimization, from problem solver to tool for finding possible solutions and developing alternatives is well established. On a completely separate note, I was impressed by Brill's discussion of the need to consider solutions off the Pareto front and the problem of developing a model in the absence of complete data. The idea that as new dimensions are added to the problem the new optimal solution will be sub-optimal in the earlier dimensions seems like an obvious concept, but one I had never really thought about before.
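To convince myself, here is a toy numeric example (my own invention, not from Brill's paper) of that effect: the design that is optimal on cost alone stops being the pick once a second objective enters the problem.

```python
# Toy example (invented numbers): three designs scored on two objectives,
# cost and environmental impact, where lower is better for both.
designs = {
    "A": {"cost": 10, "impact": 9},
    "B": {"cost": 12, "impact": 4},
    "C": {"cost": 15, "impact": 3},
}

# Optimizing on cost alone picks design A.
best_cost_only = min(designs, key=lambda d: designs[d]["cost"])

# Adding the second dimension (here with equal weights) picks design B,
# which is sub-optimal in the original cost dimension.
best_two_dim = min(designs, key=lambda d: designs[d]["cost"] + designs[d]["impact"])

print(best_cost_only, best_two_dim)   # A B
```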
And because it's the end of the semester... here's number two for the week (no pun intended).
Pan, T., Kao, J. (2009) "GA-QP Model to Optimize Sewer Design"
Pan and Kao employ a joint genetic algorithm/quadratic programming model to explore design specifications for a municipal sewer system. Building on the idea of "modeling to generate alternatives" (taken from a paper written by Brill in 1982. Go Brill, Go), Pan and Kao use the coupled model system to generate (genetic algorithm) and test (quadratic program) design alternatives, arriving at a series of possible solutions. Although the coupled model system is able to produce multiple designs that satisfy the constraints, the authors point out early in their report that there are still several, mostly socio-political, issues that defy quantification and cannot be considered in the model. This returns us to a recurring theme of modeling as a tool to produce possible solutions but not "the answer". The issues that could not be included in the model are things that, in general, are best left up to the decision makers anyway (dealing with neighborhoods and NIMBY objections, public sentiment, etc.).
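As a sketch of how the generate-and-test coupling works, here is a stripped-down toy version (my own construction; the diameters, flows, cost and capacity rules are all invented, and the "test" function is only a crude stand-in for the authors' quadratic program):

```python
import random

# Hypothetical example: one diameter choice (m) per pipe reach in a small sewer line.
DIAMETERS = [0.3, 0.4, 0.5, 0.6, 0.8]                 # allowed commercial sizes
DESIGN_FLOW = [0.05, 0.08, 0.12, 0.15, 0.20, 0.25]    # flow reaching each reach, m^3/s (invented)
N_PIPES = len(DESIGN_FLOW)

def test_design(design):
    """'Test' step: crude stand-in for the QP stage. Returns construction cost plus
    a penalty for reaches whose (invented) capacity proxy is below the design flow."""
    cost, penalty = 0.0, 0.0
    for d, q in zip(design, DESIGN_FLOW):
        capacity = 0.9 * d ** 2.67        # invented capacity proxy, not a real hydraulic formula
        cost += 120.0 * d ** 1.5          # invented unit cost vs. diameter
        if capacity < q:
            penalty += 1000.0 * (q - capacity)
    return cost + penalty

def generate(parent_a, parent_b):
    """'Generate' step (the GA): one-point crossover plus occasional mutation."""
    cut = random.randrange(1, N_PIPES)
    child = parent_a[:cut] + parent_b[cut:]
    if random.random() < 0.2:
        child[random.randrange(N_PIPES)] = random.choice(DIAMETERS)
    return child

population = [[random.choice(DIAMETERS) for _ in range(N_PIPES)] for _ in range(30)]
for _ in range(100):
    population.sort(key=test_design)
    survivors = population[:10]
    population = survivors + [generate(random.choice(survivors), random.choice(survivors))
                              for _ in range(20)]

best = min(population, key=test_design)
print("best design:", best, "score:", round(test_design(best), 1))
```

The appeal of the split is that the GA only has to propose discrete sizing decisions while the inner solver handles the continuous, better-behaved part of the problem.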
Monday, March 30, 2009
What are you doing Dave?
The authors of this paper used a combined neural network simulation-optimization approach to study the optimal management policy for a series of water supply reservoirs in Chennai, India. The neural network based approach provided a large advantage over traditional simulation approaches in terms of analytical speed. The increased computational speed came at the expense of supporting data: the neural network was designed to provide output only in terms of the final value of the optimization function and did not provide any information about the status of other variables within the system. To overcome some of this, the authors used the optimal and near-optimal results from the neural network as fodder for a more traditional Hooke and Jeeves optimization, which also allowed them to gather information regarding delivery shortfalls, spillage events and evaporative losses in the system.
To be honest I didn't completely follow the deficiency index system that the authors put forward as an evaluative criterion; the inclusion of an equity measure in the index didn't make sense to me on first or second readings. I'm not clear on why equity of shortfalls was included in the evaluation; is this an institutional constraint? Otherwise the paper was fairly impressive, and the relative speed with which the neural network was able to handle the necessary simulations for each scenario (36,000-54,000 runs) is worth noting. The idea of using multiple methods for analysis of water resource problems also seems to be a recurring theme in the papers so far. By using the neural network to do the "heavy lifting" the authors were able to find a collection of solutions that could then be further analyzed by more traditional methods.
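For reference, here is a minimal, generic Hooke and Jeeves pattern search (not the authors' code; the two-variable surrogate objective is invented) showing how a "near-optimal" point from the neural network stage could be polished:

```python
def hooke_jeeves(f, x0, step=1.0, shrink=0.5, tol=1e-4):
    """Minimal Hooke and Jeeves pattern search: exploratory moves around a base
    point, a pattern move in the improving direction, and step shrinking when stuck."""
    def explore(base, step):
        x = list(base)
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    base = list(x0)
    while step > tol:
        new = explore(base, step)
        if f(new) < f(base):
            # pattern move: jump further along the improving direction, then re-explore
            pattern = explore([2 * n - b for n, b in zip(new, base)], step)
            base = pattern if f(pattern) < f(new) else new
        else:
            step *= shrink
    return base

# Invented stand-in objective: squared shortfall from two reservoir release targets.
surrogate = lambda x: (x[0] - 3.2) ** 2 + 2.0 * (x[1] - 1.7) ** 2

# Seed with a "near-optimal" point such as one returned by the neural network search.
print(hooke_jeeves(surrogate, [2.0, 2.0]))   # converges near [3.2, 1.7]
```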
Monday, March 23, 2009
Stormwater Abatement
This week’s paper represents a move away from large-scale projects and land use controls for managing stormwater runoff in favor of multiple smaller-scale, infiltration-based practices. Perez-Pedini et al. studied a watershed in Massachusetts in an attempt to apply a genetic algorithm approach to finding the optimal method for controlling stormwater runoff. They began by dividing the watershed into ~4500 hydrologic response units that could be characterized in terms of their overall contribution to runoff during storm events. They calibrated and validated their distributed model using two events from 2002 and 2003. By comparing the predicted hydrograph at the outlet of the watershed to the hydrograph from the corresponding storm events they found that the distributed model based on hydrologic response units was able to accurately predict runoff volumes within the watershed.
Perez-Pedini et al. then selected ~1900 hydrologic response units within the watershed that met one of two criteria:
1) being highly impervious (CN > 89), or
2) being likely to contribute to runoff due to close physical proximity to a discharge stream.
The selected hydrologic response units were used as the fodder for optimizing best management practices within the watershed using a genetic algorithm approach. The genetic algorithm produced a series of possible solutions for achieving a targeted reduction in stormwater runoff using the 2003 storm simulation. From these possible solutions Perez-Pedini et al. were able to build a trade-off curve that effectively related costs to runoff reduction.
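A toy version of the generate-and-filter idea behind that curve (my own simplification, with made-up unit costs and runoff reductions and random sampling standing in for the GA): score candidate placements on cost and runoff reduction, then keep only the non-dominated points as the trade-off curve.

```python
import random

# Hypothetical response units: (placement cost, runoff reduction if a BMP is built there)
UNITS = [(5, 3), (8, 7), (12, 9), (4, 2), (10, 6), (7, 5)]

def score(placement):
    """Total cost and total runoff reduction for a 0/1 placement vector."""
    cost = sum(c for (c, r), on in zip(UNITS, placement) if on)
    reduction = sum(r for (c, r), on in zip(UNITS, placement) if on)
    return cost, reduction

# Generate candidate placements (a real GA would evolve these; random sampling
# is enough to illustrate how the trade-off curve is assembled).
candidates = [[random.randint(0, 1) for _ in UNITS] for _ in range(500)]
points = {score(p) for p in candidates}

# Keep the non-dominated points: no other point is cheaper AND removes more runoff.
frontier = sorted(
    (c, r) for (c, r) in points
    if not any(c2 <= c and r2 >= r and (c2, r2) != (c, r) for (c2, r2) in points)
)
print(frontier)   # cost vs. runoff-reduction trade-off curve
```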
The creation of the trade-off curve ties in nicely with a point that was discussed in one of the opening papers of the semester regarding the nature of public (wicked) problems and the role of optimization models. While the GA approach was able to find numerous possible solutions for reducing stormwater runoff within the watershed, the end result was the creation of a “guide” rather than an absolute answer. Balancing the costs of construction against the needs for runoff reduction requires more direction and inputs than can be easily programmed into a GA or any other optimization method. I also enjoyed that this paper focused on smaller BMPs, e.g. swales and curb cuts, rather than large detention ponds. Detention pond based BMPs can work well in developing areas, where large amounts of land are available and land costs are lower, but the BMPs tested in this paper are applicable to areas that are already highly developed. Using the methods of this paper makes it more feasible to redevelop stormwater management in highly developed areas to reduce runoff and lessen the impacts of existing developments on stream systems.
Monday, February 23, 2009
Distribution Systems II
In this paper Berry et al. expand on principles established in Lee and Deininger by applying integer programming to the problem of detecting contamination within a water distribution system. Although regulations requiring the monitoring of water quality within a distribution system have been around since the enactment of the Safe Drinking Water Act, the events of September 11th brought the need for more rigorous monitoring of water distribution systems to the forefront.
In contrast to the previously reviewed work by Lee and Deininger, Berry et al. were interested in detecting and minimizing the impacts of intentional contamination of a water supply. As such they were required to take into account the usage and population density at each demand node as a function of time. To further complicate the optimization, the location and time of day at which the contaminant was introduced to the water supply were allowed to vary between scenarios. By using synthetic probability distributions over these scenarios Berry et al. were able to develop a compromise optimization designed to minimize the impacts of an intentional contamination.
Berry et al. followed a design process similar to the methods used by Lee and Deininger by employing increasingly complex hypothetical models before using a real world system. One key difference between the two papers is the location of the sensors within the system. While Lee and Deininger placed their sensors at distribution nodes, Berry et al. placed the sensors in the pipes between the nodes. Although this might seem like a small difference, it allows contamination to be detected en route from one node to another and in theory would allow a response to begin before the contaminant reached the next node. By assigning a weight to each node based on population density and use, Berry et al. were able to create an optimized solution that placed sensors in areas where contamination would cause the highest potential impacts.
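A toy illustration of the weighted placement idea (my own construction, far simpler than the authors' integer program; the pipes, scenarios and populations are invented): enumerate sensor placements on pipes and pick the set that minimizes the population exposed before detection, summed over a few hypothetical injection scenarios.

```python
from itertools import combinations

# Hypothetical scenarios: for each injection event, the ordered list of
# (pipe traversed, population exposed at the downstream node) as the plug travels.
SCENARIOS = [
    [("p1", 500), ("p3", 1200), ("p5", 800)],
    [("p2", 300), ("p3", 1200), ("p6", 2000)],
    [("p4", 900), ("p5", 800), ("p6", 2000)],
]
PIPES = ["p1", "p2", "p3", "p4", "p5", "p6"]
N_SENSORS = 2

def impact(sensors, scenario):
    """Population exposed before the contaminant first crosses a sensored pipe."""
    exposed = 0
    for pipe, population in scenario:
        if pipe in sensors:
            return exposed          # detected on this pipe; response can begin
        exposed += population
    return exposed                  # never detected

best = min(combinations(PIPES, N_SENSORS),
           key=lambda s: sum(impact(set(s), sc) for sc in SCENARIOS))
print(best)
```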
Distribution Systems I
Lee, B.H., Deininger, R.A. (1992) “Optimal Location of Monitoring Station In Water Distribution System” J. Env. Eng. 118(1): 4-16
In this paper Lee and Deininger explore the application of integer programming to optimize the placement of monitoring stations in simulated and real world water distribution systems. Under the Safe Drinking Water Act, water suppliers are required to monitor the quality of water within the distribution system to ensure adequate quality. Although the law specifies the standards for drinking water, there are no explicit regulations regarding how to establish a compliant network for monitoring water quality within the system. To begin filling this gap Lee and Deininger apply integer programming to two hypothetical distribution systems as a proof of concept before applying the process to two real world distribution networks.
Lee and Deininger began with a simple case using a hypothetical distribution network composed of seven interconnected nodes with known demands at each node. The nodes were linked by eight pipes with known flows between each node. The integer program was based on two assumptions:
1) the quality of water at any given node is representative of the quality at each upstream node and,
2) the quality varies as a function of the flow through each path leading to the node.
Using these two assumptions as the basis for determining the “coverage” at any node allowed Lee and Deininger to design a knowledge matrix that related the degree of network coverage provided by each node. The knowledge matrix was then used in the integer program to determine the optimal location of monitoring stations for the network. They then expanded the process from the simple test network and applied it to the distribution network for Flint, Michigan. In doing so they were able to increase the coverage of the distribution network from its current value of ~18% to 54% without increasing the number of monitoring stations. Lee and Deininger followed this step by applying the same process to a hypothetical and a real world distribution network that included daily shifts in demands and flow patterns.
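A small brute-force illustration of the coverage idea (my own invented five-node “knowledge matrix” and demands, not the paper's networks or its integer program): each candidate station sees some fraction of each node's water, and the station set is chosen to maximize demand-weighted coverage.

```python
from itertools import combinations

# Invented 5-node example. coverage[j][i] = fraction of node i's water that passes
# through (and is therefore represented by a sample at) candidate station j.
coverage = {
    0: [1.0, 0.0, 0.0, 0.0, 0.0],
    1: [0.6, 1.0, 0.0, 0.0, 0.0],
    2: [0.4, 0.0, 1.0, 0.0, 0.0],
    3: [0.3, 0.7, 0.5, 1.0, 0.0],
    4: [0.2, 0.3, 0.5, 0.0, 1.0],
}
demand = [10, 20, 15, 25, 30]   # demand at each node (invented units)
N_STATIONS = 2

def covered_demand(stations):
    """Each node is credited with the best coverage any selected station gives it."""
    return sum(d * max(coverage[s][i] for s in stations)
               for i, d in enumerate(demand))

best = max(combinations(coverage, N_STATIONS), key=covered_demand)
print(best, round(covered_demand(best), 1))
```

The real formulation solves this selection as an integer program rather than by enumeration, which is what makes it workable on networks the size of Flint's.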
Overall this paper establishes the utility of integer programming for optimizing the location of monitoring stations within water distribution systems. By evolving the process from a simple proof of concept to a simple real world system and then on to more complex hypothetical and real world systems, Lee and Deininger demonstrated the robustness of their process. Unfortunately they neglected to include measures such as water use (domestic v. industrial) and population density associated with each node, and instead based their model solely on flow and demand measurements. Including usage and population density would most likely have resulted in a different allocation of the monitoring stations.
Monday, February 9, 2009
"Freedom is the recognition of necessity"
In "The tragedy of the Commons" Hardin explores the inherent weaknesses of the socio-economic view (Post Adam Smith) when applied to areas of common property. Hardin argues against Smith's position that the decisions of individuals tend to be the best decision for society as a whole since each individual agent will act an a manner that increases their own benefits. Hardin argues that while Smith's "invisible hand" might have been true at some point in history it fails to hold up in modern times in the face of increased population density. Hardin's point that the increase in population density combined with "short sighted" individual optimizations leads to the inevitable degradation of common property is supported by the evolution of environmental legislation in this and other countries. As the population has increased in this country laws that progressively replace the common property nature of natural resources with more rigidly defined property rights have become prevalent at both state and federal levels. The impacts of these varying laws can be seen in the development on prior appropriation and riparian rights in water, evolving standards governing air quality and hazardous waste disposal.
Hardin's positions on the self-selecting nature of conscience and the pathogenic effects of conscience deserve their own examinations, and hopefully I'll get back to those points later in the week. Ultimately Hardin follows the demise of the concept of "the commons" as it is progressively legislated out of existence. Each new piece of legislation is born out of a new realization that "the situation has changed", and often the driver of that change has been the incessant growth in population. And although each new law has further infringed upon the rights of citizens, the uproar caused by the infringement tends to be short lived and is followed by general acceptance. This leads Hardin back to a quote from Hegel, which I shamelessly used: "Freedom is the recognition of necessity".
Monday, February 2, 2009
Groundwater Remediation
In this paper Atwood and Gorelick explore the application of linear programming and dispersion modeling to the remediation of a contaminated aquifer. The study focused on a contaminated aquifer under the Rocky Mountain Arsenal in Colorado. The work was divided into two sections to simplify the modeling and to avoid non-linearities within the system. The first section of the work modeled the behavior of a contaminant plume under the influence of a neutral groundwater gradient and a single centrally located remedial pump. Once the optimal site for the remedial pump was identified, the change in the plume's geometry over time was simulated using dispersion modeling and groundwater flow equations. The second portion of the work focused on how to use the existing well fields to create a neutral hydraulic gradient around the contaminant plume and effectively hold it in place around the remedial pump. By using linear programming the authors found the optimal combination of pumping and recharge patterns to stabilize and contain the contaminant plume. This modeling required extensive knowledge of the existing hydraulic gradients, transmissivity and saturated depth of the aquifer. The authors used two different approaches in their optimization models. The first divided the remediation project into two management periods and, using advance knowledge of the plume's expected geometry in each period, optimized the pumping and recharge operations of each period accordingly. The second approach used no advance knowledge of the plume's expected behavior and optimized based on the plume's position at each time point. In both cases the systems were optimized to minimize the amount of pumping or recharge needed to stabilize the plume.
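A hedged sketch of what such a linear program can look like (my own toy response coefficients and tolerances, not the authors' model): minimize total pumping and recharge subject to keeping the head difference across the plume boundary within a small tolerance.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: pumping/recharge rates at 3 existing wells (non-negative).
# response[j, i] = head difference (m per unit rate) that well i induces across
# control pair j on the plume boundary.  All values are invented for illustration.
response = np.array([
    [ 0.8, -0.5,  0.1],
    [ 0.2,  0.4, -0.6],
])
ambient = np.array([0.3, -0.2])   # ambient head difference to be cancelled (m)
tol = 0.05                        # allowable residual head difference (m)

# Require |ambient + response @ q| <= tol  ->  two sets of linear inequalities.
A_ub = np.vstack([response, -response])
b_ub = np.concatenate([tol - ambient, tol + ambient])

# Objective: minimize total pumping plus recharge needed to stabilize the plume.
result = linprog(c=np.ones(3), A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(result.x, result.fun)
```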
Interestingly, in both cases the overall pumping/recharge was similar, although the two methods recommended different patterns of pumping and recharge to stabilize the plume. The authors point out that the results of each method would have to be evaluated in an economic context to determine which was ultimately optimal (taking into account the costs of recharge water and the pumping expenses). This reinforces the point made by Liebman that in complex systems optimization models can serve as decision making tools but not as the final word in the decision making process.
Monday, January 26, 2009
Assignment One
Liebman, J. (1976) “Some simple-minded observations on the role of optimization in public systems decision making” Interfaces 6(4):102-108
In his article “Some simple-minded observations on the role of optimization in public systems decision making” Jon Liebman advances the idea that the inherent differences between public and private systems result in the need for new approaches to applying optimization models. In contrast to the settings that had previously yielded successful optimization models, e.g. private industry and military applications, the nebulous nature of public systems causes tried and true optimization methods and results to become equally nebulous. Public systems are by nature impacted by multiple viewpoints and conflicting or unclear objectives, and unlike private systems they have to incorporate both “winners and losers” in the final decision. Ultimately, Liebman borrows from (and agrees with) previous authors on the subject in describing the optimization of public systems as “wicked” problems.
In spite of the “wicked” nature of problems regarding the optimization of public systems, Liebman goes on to suggest a series of ideas that can help to make these problems more tractable. The majority of the suggestions focus on the process of model creation. The fact that public systems are subject to multiple stakeholder groups suggests that there is no single model that best describes the system. Liebman goes on to argue that: 1) each group should produce its own models and 2) the models should be as concise as possible. By having each group produce its own model of the system, the groups are able to convey what they consider to be the most important elements of the system and their perception of the relationships within it. By creating “simple” models, stakeholder groups can ensure that their ideas are easily understood by other groups and decision makers. Ultimately Liebman states that in the public systems arena optimization models are best used as tools to aid in the decision making process. This is a change from the historic application of optimization models as the final word in decision making. Liebman's paper argues for the creation of more, and simpler, models that incorporate a variety of views to help decision makers.
Heidari, M. (1982) “Application of Linear System's Theory and Linear Programming to Ground Water Management in Kansas” Water Resources Bulletin 18(6):1003
Manoutchehr Heidari applied linear programming to study the management options for a small aquifer in central Kansas. The aquifer that underlies the Pawnee Valley in Kansas is used primarily to supply water for crop irrigation and had shown significant declines in storage in the years preceding the study. Heidari used the known hydrologic characteristics of the aquifer to create a model of groundwater flow in the aquifer. Through the use of linear programming he was able to create a series of functions bounded by physical and “legal” constraints on well pumping. The solutions to the linear program allowed Heidari to evaluate the short-term (five year) and longer-term (ten year) viability of individual well fields under various pumping constraints. The end results showed that even under the most constrained pumping regimes the aquifer was highly over-appropriated and that several well fields would need to be taken offline to meet the legal allocations of the aquifer.
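In the same spirit, a minimal sketch of this kind of allocation LP (my own toy coefficients, not Heidari's model): maximize total pumping across a few well fields subject to legal allocation caps and a linear drawdown limit at a control location.

```python
from scipy.optimize import linprog

# Pumping rates at 3 well fields are the decision variables (non-negative).
legal_cap = [4.0, 6.0, 5.0]          # appropriated right per field (invented units)
drawdown_per_unit = [0.9, 0.6, 1.2]  # drawdown at a control point per unit pumped (invented)
max_drawdown = 8.0                   # allowable decline in head over the planning period

# Maximize total pumping = minimize its negative, subject to the drawdown constraint.
result = linprog(
    c=[-1.0, -1.0, -1.0],
    A_ub=[drawdown_per_unit],
    b_ub=[max_drawdown],
    bounds=list(zip([0, 0, 0], legal_cap)),
)
print(result.x, -result.fun)   # pumping per field and total feasible pumping
```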
Friday, January 23, 2009
Assignment Zero
I enrolled in CVEN 665 to increase my knowledge of practical applications of systems theory and to further my knowledge of model development, evaluation and optimization. In addition to this course I am also enrolled in a course focused on conceptual model development and modeling system interactions. My hope is that while my other coursework will further my understanding of how to develop the "big picture" of a systems model, this course will help me to understand the functions required to make models useful as predictive tools.
Critical Thinking:
To me critical thinking, or critical analysis, is the ability to deconstruct the individual pieces of a statement and to evaluate each piece both on its own and in the context of the larger overall statement. This process allows for a more thorough analysis of the statement and can provide insights into the creator's biases and intents. A good introductory book on the subject of critical analysis is "The Art of Deception" by Nick Capaldi.