

What's the best?

Using optimisation methods

Optimisation is of great importance to many operations. It is the art of finding the way to design or operate something that achieves the most with the least effort or material. By way of a simple example, consider the humble cylindrical can used for food storage and preservation. An optimisation of the design might ask, for a given storage volume, what radius and height use the least amount of metal? It is fairly easy to write an expression for how the surface area of metal used varies with the radius (the associated height is linked to the radius because the volume of the cylinder is a constant). The result can be found using a plot, or by calculus, and leads to the conclusion that food cans should be cylinders with a height equal to their diameter. A quick comparison with actual designs, pictured below, shows this isn't the case.
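For a single closed can of volume \(V\), radius \(r\) and height \(h\), the calculation runs as follows:

$$ A = 2\pi r^2 + 2\pi r h, \qquad V = \pi r^2 h \Rightarrow h = \frac{V}{\pi r^2}, $$

so that

$$ A = 2\pi r^2 + \frac{2V}{r}, \qquad \frac{dA}{dr} = 4\pi r - \frac{2V}{r^2} = 0 \Rightarrow r^3 = \frac{V}{2\pi}. $$

Substituting back gives \(h = \frac{V}{\pi r^2} = \frac{2\pi r^3}{\pi r^2} = 2r\), i.e. the optimal height equals the diameter.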



The reason actual designs differ may be that food producers often sell food cans in two sizes: a full-sized can and a half-sized can, as shown above. This then becomes a problem in co-optimisation, which is very common. As a rule of thumb, optimising the individual components of a system does not generally lead to an optimal combined system.

For the co-optimisation of different sized cans some constraints should be considered. For ease of stacking the cans should have an equal radius, and the small can should be half the height of the large one. Again, it is reasonably easy to plot how the surface area of the combined production varies with can radius when the small cans are produced in the ratio n : 1 with the large ones, as shown below.


In this diagram, the thick dotted line traces the optimal radius for each production ratio n : 1, while the thin dotted line shows a typical production radius. It can be seen that if the production ratio of small to large cans is 2 : 1, the observed radius is approximately optimal. It is also apparent that the ratio can change from about 1 : 1 to about 6 : 1 or more while the radius remains very close to optimal.

Consider the case where cylinders of volume \(V_0\) and \(2V_0\) are co-manufactured in the ratio of \(n:1\) subject to

  • Both cylinders share a common radius, \(r\).
  • The cylinder with volume \(V_0\) has a height \(h\) and the cylinder with volume \(2V_0\) has a height \(2h\).
The surface area of the combined cylinders is \begin{aligned} A &= \underbrace{2\pi r^2 + 4 \pi rh}_{\text{large cylinder}} + \underbrace{n \left( 2\pi r^2 + 2 \pi rh \right)}_{\text{small cylinders}} \\ &= 2\pi \left( r^2 \left( 1+n \right) + rh\left( 2+n \right) \right). \end{aligned} The variable \(h\) can be eliminated by noting that $$ V_0 = \pi r^2 h \Rightarrow h=\frac{V_0}{\pi r^2},$$ giving $$ A = 2 \left( \pi r^2 \left( 1+n \right) + \frac{V_0}{r}\left( 2+n \right) \right), $$ which is plotted above as a series of curves for different values of \(n\), with \(V_0=220\). Using calculus, the minimum occurs when $$ \frac{dA}{dr} = 2 \left( 2\pi r \left( 1+n \right) - \frac{V_0}{r^2}\left( 2+n \right) \right) = 0.$$ Solving for \(r\) gives $$r=\sqrt[3]{\frac{V_0 \left( 2+ n\right)}{2\pi\left( 1+n \right)}}. $$

When \(n=0\) (no small cylinders) the radius of the large cylinder is given by \(r=\sqrt[3]{\frac{V_0}{\pi}}\), and its height \(2h\) (recall the definition above) is given by \(2h=\frac{2V_0}{\pi r^2}\). Together these show that for a minimum surface area the large cylinder is characterised by \(r=h\), i.e. the diameter, \(2r\), equals the height, \(2h\). The other plotted curves arise when \(n>0\).
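As a quick numerical cross-check of this result, the Python sketch below scans the expression for \(A\) over a range of radii and compares the minimum found with the cube-root formula. The values \(V_0=220\) (as plotted) and \(n=2\) are used purely for illustration.

    import numpy as np

    V0 = 220.0   # volume of the small can, the value used in the plot above
    n = 2        # number of small cans per large can (illustrative choice)

    def area(r):
        """Combined surface area of one large and n small cans of common radius r."""
        return 2.0 * (np.pi * r**2 * (1 + n) + (V0 / r) * (2 + n))

    # Numerical minimum found by scanning a range of radii
    radii = np.linspace(1.0, 10.0, 100_000)
    r_numeric = radii[np.argmin(area(radii))]

    # Analytic minimum from setting dA/dr = 0
    r_analytic = (V0 * (2 + n) / (2 * np.pi * (1 + n))) ** (1.0 / 3.0)

    print(f"numeric  r = {r_numeric:.4f}")
    print(f"analytic r = {r_analytic:.4f}")

The two values should agree to within the resolution of the scan.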

The problem above can easily be solved using graphical or calculus methods. However, there are many problems where this is not possible because it is hard to write an expression for how the optimising variable changes with configuration, even though it is reasonably easy to write an expression for how good a given configuration is. An example of this might be cutting shapes from a sheet of material: it is easy to measure how much material is wasted, but much harder to write a function that predicts exactly where each shape should go. In such cases other methods are required. Some of these use rules and perform systematic searches; others, such as genetic algorithms and simulated annealing, start with random configurations and apply methods to guide how the configuration changes towards the optimal.

A genetic algorithm
A genetic algorithm is a method of optimisation that draws its inspiration from Darwinian selection. The state of a problem is encoded in a set of "genes" in a population of "individuals". These "individuals" are ranked by how well they score against the desired measure, and those that rank highest pass their genes on to the next generation by "mating" to produce offspring with a mixture of the genetic characteristics of both parents. In the case of the can problem above, the genetic sequence of an individual is a set of digits that encodes the radius of the can. For example, a radius of 3.23 is encoded by the gene sequence 323. The fitness of an individual is the can surface area that its radius produces; those with lower values are ranked higher and will mate more frequently. Mating to produce an offspring is achieved by constructing a new gene sequence where each gene has a random 50% chance of being derived from one or the other parent, as shown in the example below, where a parent with radius 3.23 mates with another of radius 4.19 to produce an offspring with radius 3.19.
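The encoding and crossover steps just described might be sketched in Python as follows; the helper names are illustrative.

    import random

    def encode(radius):
        """Encode a radius such as 3.23 as the digit string '323'."""
        return f"{radius:.2f}".replace(".", "")

    def decode(genes):
        """Decode a digit string such as '323' back to the radius 3.23."""
        return int(genes) / 100.0

    def crossover(parent_a, parent_b):
        """Each gene has a 50% chance of coming from either parent."""
        return "".join(random.choice(pair) for pair in zip(parent_a, parent_b))

    # The example from the text: 3.23 mated with 4.19 can yield 3.19
    child = crossover(encode(3.23), encode(4.19))
    print(decode(child))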


The above process is not enough, however. The fitness criterion applied will eventually make the population genetically uniform and the process will stop. To counter this, another feature of Darwinian evolution is borrowed: the concept of random mutation. Under random mutation, each genetic element of the offspring is allowed to change randomly to another digit with probability \(P\). Note that \(P\) should be reasonably small, otherwise mutations will occur too frequently and the selection process will not work efficiently.
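A corresponding mutation step might look like the sketch below, with the probability \(P\) written as the parameter p_mutate; its value of 0.05 is purely illustrative.

    import random

    def mutate(genes, p_mutate=0.05):
        """Each digit is replaced by a random digit with probability p_mutate."""
        return "".join(
            random.choice("0123456789") if random.random() < p_mutate else gene
            for gene in genes
        )

    print(mutate("319"))   # usually unchanged; occasionally a digit flips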

The example below demonstrates how a genetic algorithm with a population size of 20 can be used to determine the best radius for the can problem shown above (chosen because we know what the answer should be). Once started, the population is ranked and used to produce offspring. These replace the parent population and the process repeats.
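Putting the pieces together, a minimal self-contained sketch of such an algorithm is given below. It assumes the combined-can surface area (with \(V_0=220\) and \(n=2\)) as the fitness measure; the choice of the fittest half as parents, the mutation probability and the number of generations are illustrative settings rather than those of the demonstration itself.

    import math
    import random

    V0, N_SMALL = 220.0, 2                 # assumed problem values for illustration
    POP_SIZE, GENERATIONS, P_MUTATE = 20, 100, 0.05

    def area(r):
        """Combined surface area of one large and N_SMALL small cans of radius r."""
        if r <= 0:
            return float("inf")            # guard against a degenerate decoded radius
        return 2.0 * (math.pi * r**2 * (1 + N_SMALL) + (V0 / r) * (2 + N_SMALL))

    def encode(radius):
        return f"{radius:.2f}".replace(".", "")

    def decode(genes):
        return int(genes) / 100.0

    def crossover(a, b):
        return "".join(random.choice(pair) for pair in zip(a, b))

    def mutate(genes, p=P_MUTATE):
        return "".join(random.choice("0123456789") if random.random() < p else g
                       for g in genes)

    # Random starting population of radii between 1.00 and 9.99
    population = [encode(random.uniform(1.0, 9.99)) for _ in range(POP_SIZE)]

    for _ in range(GENERATIONS):
        # Rank by fitness: a smaller surface area is better
        population.sort(key=lambda g: area(decode(g)))
        parents = population[:POP_SIZE // 2]           # the fittest half mate
        population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(POP_SIZE)]        # offspring replace the parents

    population.sort(key=lambda g: area(decode(g)))
    print("best radius found:", decode(population[0]))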



The above demonstration has a limited population size of 20. This may not be enough to explore the range of variations possible in the offspring in any one generation, and as a consequence the results can occasionally become 'stuck' in a genetically homogeneous state. An obvious solution to this is to increase the population size.

One of the problems with using a genetic algorithm is that you don't quite know when to stop it. In the example above, the best radius will hover close to the target radius but may not always be equal to it. One possible strategy is to stop when the top N entries are the same. Another is to stop when the top N population values are within some user-defined tolerance (in the above, 0.0015 may be appropriate) of their mean. In these cases some experimentation may be required to determine what value of N to use; an initial value of N = 5 or 10 could be tried.
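The second of these stopping strategies might be sketched as follows; the function name and default values are illustrative.

    def should_stop(top_radii, tol=0.0015):
        """Stop when the top-ranked radii all lie within tol of their mean."""
        mean = sum(top_radii) / len(top_radii)
        return all(abs(r - mean) <= tol for r in top_radii)

    # e.g. inside the generation loop, with the population already ranked:
    #     if should_stop([decode(g) for g in population[:5]]):
    #         break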

It should be noted that even when a stopping criterion has been established, the answer may not be acceptably close to the optimum value. This is obvious in the above demonstration because the optimal value is known; it is less obvious when you don't know what the optimal value actually is. This is related to another possible problem: you don't know whether the genetic algorithm has found the global best value or a local best value, i.e. is there only one minimum to find, or are there several and the algorithm has got stuck inside just one of them? To try to avoid these issues, the genetic algorithm should be run a number of times, each with a different random starting population, and the best result obtained should be used. Even then, you cannot guarantee that the answer given is the best answer, but it should be reasonably close. The above example typically gets to within 1% of the optimal value before genetic homogeneity stops the process from getting any further.

