
Using optimisation methods

Optimisation is of great importance to many operations. It is the art of finding the best way to design or operate something, achieving the most with the least effort or material use. By way of a simple example, consider the humble cylindrical can used for food storage and preservation. An optimisation of the design might ask: for a given storage volume, what radius and height use the least amount of metal? It is fairly easy to write an expression for how the surface area of metal varies with the radius (the associated height is linked to the radius because the volume of the cylinder is constant). The result can be found using a plot, or by calculus, and leads to the conclusion that food cans should be cylinders with a height equal to their diameter, i.e. \(h = 2r\). A quick comparison with actual designs shows this isn't the case, as shown below.

The reason actual designs differ may arise from the fact that food producers often sell cans in two sizes: a full-sized can and a half-sized can, as shown above. This, then, becomes a problem in co-optimisation, which is very common. As a rule of thumb, optimising the individual components of a system does not generally lead to an optimal combined system.
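As a sketch of the single-can calculation, the surface area of a closed cylinder holding a fixed volume can be minimised with a simple numerical scan; the volume of 500 cm³ below is an arbitrary illustrative choice, not a value from the text:

```python
import math

def surface_area(r, V):
    # closed cylinder: two end caps plus the wall; the height h is
    # fixed by the requirement that the can holds volume V
    h = V / (math.pi * r ** 2)
    return 2 * math.pi * r ** 2 + 2 * math.pi * r * h

V = 500.0  # cm^3, an arbitrary illustrative volume
radii = [0.5 + 0.001 * i for i in range(10000)]  # coarse numerical scan
best_r = min(radii, key=lambda r: surface_area(r, V))
best_h = V / (math.pi * best_r ** 2)
# the minimising height comes out at twice the radius, i.e. h = 2r
```

The same result follows from calculus: setting the derivative of \(2\pi r^2 + 2V/r\) to zero gives \(r = (V/2\pi)^{1/3}\), and hence \(h = 2r\).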

For the co-optimisation of the two can sizes some constraints should be considered. For ease of stacking the cans should have an equal radius, and the small can should be half the height of the large one. Again, it is reasonably easy to plot how the surface area of the combined production varies with product radius when the small cans are produced in the ratio

In this diagram, the thick dotted line connects the best radii for the different production ratios.

- Both cylinders share a common radius, \(r\).
- The cylinder with volume \(V_0\) has a height \(h\) and the cylinder with volume \(2V_0\) has a height \(2h\).
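A minimal sketch of the combined calculation under these constraints, assuming \(n\) small cans are produced for every large can (the ratio values and the volume below are illustrative, not taken from the diagram):

```python
import math

V0 = 500.0  # cm^3, an illustrative volume for the small can

def combined_area(r, n):
    # small can: volume V0, height h; large can: volume 2*V0, height 2h;
    # both share radius r, so the height is fixed by h = V0 / (pi r^2)
    h = V0 / (math.pi * r ** 2)
    area_small = 2 * math.pi * r ** 2 + 2 * math.pi * r * h
    area_large = 2 * math.pi * r ** 2 + 2 * math.pi * r * (2 * h)
    # n small cans are made for every large can (n is an assumption)
    return n * area_small + area_large

radii = [0.5 + 0.001 * i for i in range(10000)]
bests = {n: min(radii, key=lambda r: combined_area(r, n))
         for n in (0, 1, 2, 4)}
# the best shared radius falls as more small cans are produced
```

Plotting `combined_area` against `r` for each ratio reproduces the family of curves described above, with the minimum shifting as the production ratio changes.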

The problem above can easily be solved using graphical or calculus methods. However, there are many cases where this is not possible, because it is hard to write an expression for how the optimising variable changes with configuration, even though it is reasonably easy to write an expression for how good a given configuration is. An example of this might be cutting shapes from a sheet of material: it's easy to measure how much material is wasted, but much harder to write a function that predicts exactly where each shape should go. In such cases other methods are required. Some of these use rules and perform systematic searches; others, such as genetic algorithms and simulated annealing, start with random configurations and apply methods that guide the configuration towards the optimum.
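As an illustration of this second family of methods, here is a minimal simulated-annealing sketch applied to the can-radius problem (the starting temperature, step size, and cooling schedule are arbitrary choices, and the 500 cm³ volume is illustrative):

```python
import math
import random

V = 500.0  # cm^3, an illustrative fixed volume

def surface_area(r):
    # closed-can surface area for fixed volume V
    return 2 * math.pi * r ** 2 + 2 * V / r

rng = random.Random(42)
r = rng.uniform(0.5, 10.0)  # random starting configuration
T = 100.0                   # starting "temperature" (arbitrary)
while T > 1e-3:
    candidate = r + rng.gauss(0, 0.2)  # small random step
    if candidate > 0.1:
        delta = surface_area(candidate) - surface_area(r)
        # always accept improvements; accept worse moves with
        # probability exp(-delta / T), which shrinks as T cools
        if delta < 0 or rng.random() < math.exp(-delta / T):
            r = candidate
    T *= 0.995  # geometric cooling schedule
```

Early on, the high temperature lets the search wander freely; as it cools, worse moves are rejected and the radius settles near the minimum.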

The above process is not enough, however. The fitness criterion applied will eventually make the population genetically uniform and the process will stop. To allow for this, another feature of Darwinian evolution is borrowed: the concept of random mutation. Under random mutation, each genetic element of the offspring is allowed to change randomly to another digit with probability \(P\). Note that \(P\) should be reasonably small, otherwise mutations will occur too frequently and the selection process will not work efficiently.
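A minimal sketch of this mutation step, assuming genomes are strings of decimal digits (the example genome and the probability value are illustrative):

```python
import random

def mutate(genome, p=0.02):
    """Each digit of the genome flips to a random digit with probability p.

    p is kept small so that selection still dominates (0.02 is illustrative).
    """
    return "".join(
        random.choice("0123456789") if random.random() < p else digit
        for digit in genome
    )

# a hypothetical six-digit genome, e.g. encoding a radius of 4.31200
child = mutate("431200")
```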

The example below demonstrates how a genetic algorithm with a population size of 20 can be used to determine the best radius for the can problem shown above (chosen because we know what the answer should be). Once started, the population is ranked and used to produce offspring. These replace the parent population and the process repeats.
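A sketch of such a genetic algorithm is below. The digit-string encoding, the fixed 500 cm³ volume, and the keep-the-fitter-half selection rule are all assumptions for illustration, not necessarily the exact scheme used in the demonstration:

```python
import math
import random

V = 500.0  # cm^3, an illustrative fixed volume

def decode(genome):
    # six decimal digits encode a radius between 0.00000 and 9.99999
    # (an assumed encoding)
    return int(genome) / 1e5

def fitness(genome):
    r = decode(genome)
    if r < 0.1:
        return float("inf")  # reject degenerate radii
    # closed-can surface area for fixed volume V (lower is better)
    return 2 * math.pi * r ** 2 + 2 * V / r

def crossover(a, b):
    cut = random.randrange(1, len(a))  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome, p=0.02):
    # each digit flips to a random digit with small probability p
    return "".join(random.choice("0123456789") if random.random() < p else g
                   for g in genome)

random.seed(0)
population = ["".join(random.choice("0123456789") for _ in range(6))
              for _ in range(20)]

for generation in range(200):
    population.sort(key=fitness)  # rank the population
    parents = population[:10]     # the fitter half become parents
    population = [mutate(crossover(random.choice(parents),
                                   random.choice(parents)))
                  for _ in range(20)]  # offspring replace the parents

best = min(population, key=fitness)
```

After a few hundred generations the best genome decodes to a radius close to the analytic optimum \((V/2\pi)^{1/3}\).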

The above demonstration has a limited population size of 20. This may not be enough to explore the range of variations possible in the offspring in any one generation, and as a consequence the results can occasionally become 'stuck' in a genetically homogeneous state. An obvious solution to this is to increase the population size.

One of the problems with using a genetic algorithm is that you don't quite know when to stop it. In the example above, the best radius will hover close to the target radius but may not always equal it. One possible strategy is to stop when the top

It should be noted that even when a stopping criterion has been established, the answer may not be acceptably close to the optimum value. This is obvious in the above demonstration because the optimal value is known; it is less obvious when you don't know what the optimal value actually is. This is related to another possible problem: you don't know whether the genetic algorithm has found the global best value or only a local best value; i.e. is there only one minimum to find, or are there several and the algorithm has become stuck inside just one of them? To try to avoid these issues, the genetic algorithm should be run a number of times, each with a different random starting population, and the best result obtained used. Even then, you cannot guarantee the answer given is
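The restart strategy can be sketched as below, using a deliberately simplified genetic algorithm in which radii are stored directly as floats; the blending crossover, Gaussian mutation, and run counts are illustrative simplifications, not the scheme of the demonstration above:

```python
import math
import random

V = 500.0  # cm^3, an illustrative fixed volume

def surface_area(r):
    if r <= 0.1:
        return float("inf")  # guard against degenerate radii
    return 2 * math.pi * r ** 2 + 2 * V / r

def run_once(seed, generations=100, pop_size=20):
    """One GA run; crossover and mutation are deliberately simplified."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.5, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=surface_area)
        parents = pop[: pop_size // 2]  # the fitter half become parents
        # offspring: blend of two parents plus a small Gaussian mutation
        pop = [(rng.choice(parents) + rng.choice(parents)) / 2
               + rng.gauss(0, 0.05)
               for _ in range(pop_size)]
    return min(pop, key=surface_area)

# several independent runs, each with a different random starting population
results = [run_once(seed) for seed in range(5)]
best_r = min(results, key=surface_area)
```

Taking the best of several independent runs reduces, but does not eliminate, the risk of reporting a local rather than global optimum.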