How to create objective function for genetic algorithm?
Answers
Objective functions are supposed to quantify how good a set of parameters is. It doesn't matter how you got the parameters; for example, it doesn't matter that you're using genetic algorithms. You just need to create some function that says how well the parameters work.
So you're designing a filter, which requires that the filter's parameters be optimized to fit its application, and you'd like to use genetic algorithms for the optimization?
You're correct that the filter's parameters should be used to construct the chromosomes in the genetic algorithm. The genetic algorithm then does its work by:

1. Calculating how well each chromosome (combination of parameters) performs.
2. Discarding the chromosomes that didn't perform well.
3. Modifying (mutating) the parameter sets (chromosomes) that did perform well.
4. Going back to Step (1), looping until either it finds a parameter set that's good enough or it gives up (fails to converge).
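The loop above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not a production GA: it assumes chromosomes are lists of floats, that `objective` returns an error to minimize, and it uses simple truncation selection with Gaussian mutation (no crossover).

```python
import random

def genetic_search(objective, n_params, pop_size=20, generations=100,
                   mutation_scale=0.1, tolerance=1e-6):
    # Start with random parameter sets (chromosomes).
    population = [[random.uniform(-1, 1) for _ in range(n_params)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Step 1: score every chromosome with the objective function.
        scored = sorted(population, key=objective)
        best = scored[0]
        if objective(best) < tolerance:       # good enough: stop early
            return best
        # Step 2: discard the half that didn't perform well.
        survivors = scored[:pop_size // 2]
        # Step 3: mutate the survivors to refill the population.
        population = survivors + [
            [g + random.gauss(0, mutation_scale)
             for g in random.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
    # Step 4 looped back above; here the algorithm gives up.
    return min(population, key=objective)
```

Note that the objective function is just an argument: the search loop never needs to know what it measures, which is exactly the separation described below.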
However, how should the algorithm assess how "good" the parameter sets are? That's what the objective function's for. The objective function takes the chromosome as input, then produces a result that quantifies how good that chromosome is.
Since chromosomes are sets of parameters, they're often represented as a vector (array). For example, if your filter is, say,

f(x) = a·x² + b·x + c,

then the chromosome is the array of parameters {a, b, c}.
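Concretely, that mapping from chromosome to model might look like this (the names `model` and `chromosome` are hypothetical, just for illustration):

```python
def model(chromosome, x):
    # The chromosome is the parameter array {a, b, c} of f(x) = a*x^2 + b*x + c.
    a, b, c = chromosome
    return a * x**2 + b * x + c

model([1.0, 0.0, -2.0], 3.0)   # evaluates 1*9 + 0*3 - 2 = 7.0
```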
The question statement doesn't specify your filter or exactly what sort of problem you're working on. However, since you want to use sum-squared error, that implies that your filter produces an output with quantifiable error.
So, presumably, your objective function would be
error = ∑∀ data i (f_measured(xᵢ) − f_model(xᵢ))²,
or something like that. Some folks prefer to use the root-mean-square, i.e.
error = √( ∑∀ data i (f_measured(xᵢ) − f_model(xᵢ))² ),
but that shouldn't matter here.
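As a sketch of the two formulas above (assuming `measured` is a list of (xᵢ, yᵢ) observations and `model` maps a chromosome and an x to a prediction; both names are just for illustration):

```python
import math

def sum_squared_error(chromosome, measured, model):
    # Sum over all data points of the squared prediction error.
    return sum((y - model(chromosome, x)) ** 2 for x, y in measured)

def root_sum_squared_error(chromosome, measured, model):
    # The square-root variant; it has the same minimizer,
    # so the choice shouldn't matter to the genetic algorithm.
    return math.sqrt(sum_squared_error(chromosome, measured, model))
```

Either function can be handed to the genetic algorithm unchanged, since both take a chromosome and return a single number to minimize.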
In general, I'd suggest seeing genetic algorithms as stepping functions. Their core duty is to produce new guesses during an optimization process. Objective functions live elsewhere in the optimization process's structure; that is, objective functions don't care how you're stepping - whether you're using genetic algorithms, Newton's method, Simplex, gradient descent, bisection, random guessing, or whatever else - it doesn't matter to the objective function. All the objective function has to do is say how good a guess is, regardless of how that guess was generated.