1 Introduction
Genetic Algorithms (GAs) are population-based metaheuristic algorithms. They were first introduced in [16], and have demonstrated effectiveness on a wide range of problems, from constrained optimization to the design and optimization of neural networks and other classifiers, see e.g. [25, 24]. They are known for their flexibility in terms of the choice of parameters (population and pool sizes, selection function, elitism selection, etc.) and encodings: binary, integer, and real (float). In the GA framework, solutions to a problem are encoded as strings, the fitness of the strings is evaluated according to a problem-specific function, and then strings are chosen to enter a ‘mating pool’ as some function of their fitness. Strings in the mating pool are recombined using genetic operators, and these new offspring strings form a new population. Over many iterations of the algorithm, strings with better fitness (that is, better solutions to the problem) evolve. In general, two types of genetic operators are used: crossover and mutation. Mutation varies the values of entries of a single string and is effectively a version of local search, exploiting the knowledge encoded in the current string and seeking local improvements to it. In contrast, crossover operators recombine genetic information between parents, which enables a form of global search, exploring the search space more effectively. It has been shown both experimentally and (for some problems) theoretically that GAs with versions of crossover and mutation outperform mutation-only algorithms such as hill-climbers and Randomized Local Search (see e.g. [6, 26]).
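The loop described above can be sketched in a few lines of Python. The operator choices here (binary tournament selection into the mating pool, one-point crossover, bit-flip mutation) and all parameter values are illustrative assumptions, not the configuration used later in this paper:

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=100,
                      crossover_rate=0.9, mutation_rate=0.05, seed=0):
    """Minimal GA loop sketch: binary tournament selection into a mating
    pool, one-point crossover, bit-flip mutation (illustrative only)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # mating pool: each slot filled by the fitter of two random individuals
        pool = [max(rng.sample(pop, 2), key=fitness) for _ in range(pop_size)]
        nxt = []
        for i in range(0, pop_size, 2):
            p1, p2 = pool[i][:], pool[i + 1][:]
            if rng.random() < crossover_rate:     # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1[cut:], p2[cut:] = p2[cut:], p1[cut:]
            for child in (p1, p2):                # bit-flip mutation
                for j in range(n_bits):
                    if rng.random() < mutation_rate:
                        child[j] ^= 1
                nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax: fitness is simply the number of ones in the string
best = genetic_algorithm(sum)
```

On a toy problem such as OneMax this sketch reliably evolves strings close to the all-ones optimum within a few dozen generations.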
While the original GA was based on binary strings, variations based on integer values and floating point numbers are common. Real-Coded Genetic Algorithms (RCGAs) use chromosomes (strings) of floating point numbers and are particularly useful for solving problems where ‘real-valued’ encoding arises naturally, e.g. training of classifiers such as neural networks (see [2]), signal processing, constrained optimization problems, etc. They are also particularly useful for higher-dimensional optimization problems, where binary encoding is simply infeasible. Readers are referred to [8] for a concise explanation of the advantages of RCGAs, and to [32] for an extended one.
The development and study of new genetic operators for RCGAs is both a historically rich and an active topic of research. Some of the seminal papers include [8, 1, 4, 27, 32, 21]. Recently, hybrid crossover operators were thoroughly analyzed in [14, 15], distance-based crossover operators were studied in [34], and in [30] a double Pareto crossover was introduced and successfully tested on a range of multimodal test functions. In [30, 33] a substantial overview of the history of RCGA crossover operators is also given. In many cases parents can generate more (or fewer) than the standard set of two offspring, or an offspring can have more than two parents (see e.g. [19, 13, 7]).
In this paper we follow in these footsteps and introduce two real-coded versions of the K-Bit-Swap (KBS) genetic operator, a crossover operator that enables the location of elements of the string to change (transpose) when crossover occurs. We evaluate its use with an RCGA solving multidimensional multimodal real-coded optimization problems, both alone and together with other genetic operators. A version of KBS was introduced in [29] (see also [28] for an extended version of this work) and showed some promise, mostly for binary-coded functions.
2 Connection with GA Theory: Exploration vs Exploitation at Population and Gene Levels
Just as with binary GAs, empirical results show a statistically significant advantage of RCGAs with crossover/recombination operators when compared with mutation-only algorithms. A few probable reasons for this are that crossovers:

reduce the probability of premature convergence to local optima,

extend the list of possible offspring that can be generated (especially when probability distributions over offspring are used),

extend the exploration of fitness space to subsets that are far from the current solution,

combine exploration and exploitation to some degree, while mutation is primarily an exploitation operator.
The last point is of particular interest when it comes to the development of new crossover operators. In the GA community there is little clarity about the definition of the terms ‘exploration’ and ‘exploitation’. Intuitively, exploration deals with sampling from entirely new segments of the genotype space, while exploitation focuses on sampling in the vicinity of existing solutions. In [3] the authors give an overview of approaches to this issue since the early years of GA theory. In short, they can be described as follows:

Genotype-based, e.g. a distance of some sort between individuals (Manhattan, Hamming, etc.),

Phenotype-based, e.g. the number of different phenotypes (fitness values) in a population, used to determine diversity.
Hence, the definitions of exploration and exploitation depend on the definition of diversity in the population: finding new individuals outside of the ‘basin of similarity’ between existing members of the population would be considered exploration. Balancing these two processes, referred to as diversity maintenance (which can be achieved by e.g. niching, crowding, mating restrictions, or selection pressure), is crucial to the success of the algorithm. Too much diversity (exploration) can harm the search by reducing the attention given to promising basins around local optima, while a lack of it (pure exploitation) reduces the chances of finding good new local optima.
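As an illustration of the two families of diversity measures, here is a small Python sketch; the specific choices (mean pairwise Euclidean distance for genotype diversity, a count of distinct rounded fitness values for phenotype diversity) are assumptions for illustration only:

```python
import numpy as np

def genotype_diversity(pop):
    """Genotype-based measure: mean pairwise Euclidean distance
    between chromosomes in the population."""
    pop = np.asarray(pop, dtype=float)
    n = len(pop)
    dists = [np.linalg.norm(pop[i] - pop[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))

def phenotype_diversity(pop, fitness, decimals=6):
    """Phenotype-based measure: number of distinct fitness values
    (rounded, to tolerate floating-point noise) in the population."""
    return len({round(fitness(ind), decimals)
                for ind in np.asarray(pop, dtype=float)})
```

A population of near-identical chromosomes scores low on both measures; a population spread across several fitness basins scores high.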
The approaches mentioned above are population-level tools. In this paper we try a gene-level approach (as in, e.g., [13, 7]): given two parents producing two new offspring, there is some distance between the parents and the offspring. Once we have selected the genes/bits for recombination, we use the crossover operator to exchange the information between these genes. The distance from the parents depends on this operator: if it searches strictly in the landscape between the gene values, it is an exploitation operator; if, instead, the operator extends the search space (without fully excluding the interval between the gene values), it is biased towards exploration.
Arithmetical crossover (AX) is an example of the first type of operator (see [20]):

o1 = λ·p1 + (1 − λ)·p2,  o2 = (1 − λ)·p1 + λ·p2,  λ ∈ (0, 1),

where p1, p2 are parental gene values and o1, o2 are the offspring’s gene values. Clearly both o1 and o2 lie strictly between p1 and p2, so this operator exploits the interval between the sampled values in each parent. A good example of an exploration-biased operator is BLX-α (see [8]):

o ∼ U[min(p1, p2) − αI, max(p1, p2) + αI],  I = |p1 − p2|.

The last expression means that new values for the gene are sampled from this interval, which stretches beyond [min(p1, p2), max(p1, p2)] by a factor of α on each side; for example, α = 1 triples the length of the interval between the gene values. In [8] this operator was introduced as an example of interval schemata.
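The two operators can be sketched as follows (the parameter names `lam` and `alpha` are illustrative; the functions are vectorized over whole chromosomes):

```python
import numpy as np

def arithmetical_crossover(p1, p2, lam=0.4):
    """AX: offspring are convex combinations of the parents, so every
    offspring gene lies strictly between the corresponding parental
    values (an exploitation-biased operator)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    return lam * p1 + (1 - lam) * p2, (1 - lam) * p1 + lam * p2

def blx_alpha(p1, p2, alpha=0.5, rng=None):
    """BLX-alpha: each offspring gene is drawn uniformly from the parental
    interval extended on both sides by alpha * |p1_i - p2_i|
    (an exploration-biased operator)."""
    rng = rng or np.random.default_rng(0)
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    lo, hi = np.minimum(p1, p2), np.maximum(p1, p2)
    ext = alpha * (hi - lo)
    return rng.uniform(lo - ext, hi + ext)
```

With parents 0 and 10 per gene, AX offspring stay inside [0, 10], while BLX-0.5 offspring may land anywhere in [−5, 15].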
In Section 3 we introduce two new KBS operators that fit the definition of exploitation-biased operators. Nevertheless, we argue that due to their main property they allow a certain degree of landscape exploration.
3 K-Bit-Swap Genetic Operator for Real-Coded Problems
The original K-Bit-Swap operator was developed for predominantly binary-coded GAs. It was presented in [29], and a pseudocode description is given in Figure 1. The operator is a form of crossover where, instead of bits being swapped between the strings with their location in the string held fixed, as is standard with most crossovers, the location of the bit in the new string is chosen uniformly at random. The results in [29] demonstrated that the operator improved the results of a GA on numerical optimization problems, but not on an integer-coded variant of the Travelling Salesman Problem.
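Since Figure 1 is not reproduced here, a plausible sketch of the binary operator following the description above might look like this (the parameter `k` for the number of swapped bits is an assumption):

```python
import random

def k_bit_swap(parent1, parent2, k=2, rng=None):
    """Binary KBS sketch: swap k bits between two strings, choosing the
    position in EACH string uniformly at random, so that, unlike standard
    crossover, the two positions need not match.  Operates on copies."""
    rng = rng or random.Random(0)
    c1, c2 = list(parent1), list(parent2)
    for _ in range(k):
        i = rng.randrange(len(c1))   # location in the first string
        j = rng.randrange(len(c2))   # independently chosen location in the second
        c1[i], c2[j] = c2[j], c1[i]
    return c1, c2
```

Because values are swapped rather than overwritten, the multiset of bits across the pair of strings is preserved.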
It is perfectly possible to transfer this version of KBS to real-valued GAs without substantial changes, just as a standard crossover can be transferred. However, given the success of Arithmetical Crossover (AX) and Blend Crossover (BLX) in RCGAs, we follow this convention and instead consider variants of KBS that follow the idea behind these two operators, as well as the original KBS, as described next.
3.1 KBS-U (Uniform Location Selection)
The main idea behind KBS is to enable swapping of genetic information between different schemata without binding the choice of the location in the second parent to the location selected in the first parent. Discussion of schemata theory in the context of GAs is beyond the scope of this article; relevant information can be found in [11, 23, 22]. It is only worth mentioning here that, unlike the bulk of crossovers, KBS operators recombine information from unrelated schemata. A simple tweak of the original KBS idea with an additional feature (a mixing parameter λ, similar to the one used in AX and BLX) is presented in Figure 2 (note that the version of KBS described previously is recovered at the endpoint values of λ, where gene values are swapped wholesale).
This operator resembles both AX and BLX in the sense that a value in a parent’s gene is multiplied by a coefficient (λ or 1 − λ). However, KBS samples the value to swap with the first parent’s value uniformly in the second parent, rather than taking it from the matching location in the string, so that genes from unrelated features (schemata) are swapped. In the current setting, this operator has a greater exploitation but a lesser exploration capacity, because the region available for sampling a new value lies strictly between the two parental values. This drawback is partly overcome by its main property, randomized location selection, a feature that AX does not have. Also, unlike BLX, we do not use an interval extending beyond the parents’ values.
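A sketch of KBS-U under the description above; the exact update in Figure 2 may differ, so the blend form (borrowed from AX) and the parameter names `k` and `lam` are assumptions:

```python
import numpy as np

def kbs_u(p1, p2, k=2, lam=0.4, rng=None):
    """Real-coded KBS-U sketch: pick a gene uniformly in EACH parent
    (the two locations are independent) and replace both by convex
    blends of the two values, as in AX, so the new values stay strictly
    between the originals while mixing unrelated features."""
    rng = rng or np.random.default_rng(0)
    c1, c2 = np.array(p1, dtype=float), np.array(p2, dtype=float)
    for _ in range(k):
        i = int(rng.integers(len(c1)))   # location in the first parent
        j = int(rng.integers(len(c2)))   # independent location in the second
        a, b = c1[i], c2[j]
        c1[i] = lam * a + (1 - lam) * b  # blends lie between a and b
        c2[j] = (1 - lam) * a + lam * b
    return c1, c2
```

Since every new value is a convex combination of existing values, offspring genes never leave the range spanned by the two parents.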
3.2 KBS-N (Normal Location Selection)
Next, we consider a variant of KBS that selects the location in the second parent by sampling from a normal (Gaussian) distribution centered on the location chosen in the first string. This ‘derandomizes’ KBS, making it more of a uniform-type crossover than a mutation operator since, due to the lower variance, the selected location in the second parent is likely to be close to the original one. Note that if this normal distribution has zero variance, the operator becomes a standard crossover. Empirically, we found a variance of 4 to be a good choice, and this value is used in the experimental part of the article.
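A sketch of the normal-location variant; the blend update borrowed from AX and the parameter names are assumptions, and out-of-range locations are clipped to the ends of the string:

```python
import numpy as np

def kbs_n(p1, p2, k=2, lam=0.4, sigma2=4.0, rng=None):
    """KBS-N sketch: the gene location in the second parent is drawn from
    a normal distribution centred on the location chosen in the first
    parent (variance 4, as in the text); locations falling outside the
    string are clipped to the first or last position."""
    rng = rng or np.random.default_rng(0)
    c1, c2 = np.array(p1, dtype=float), np.array(p2, dtype=float)
    n = len(c1)
    for _ in range(k):
        i = int(rng.integers(n))
        # location in second parent ~ N(i, sigma2), clipped into [0, n - 1]
        j = int(np.clip(round(rng.normal(i, np.sqrt(sigma2))), 0, n - 1))
        a, b = c1[i], c2[j]
        c1[i] = lam * a + (1 - lam) * b
        c2[j] = (1 - lam) * a + lam * b
    return c1, c2
```

Setting `sigma2 = 0` reduces the location rule to `j = i`, i.e. a standard position-matched crossover, as noted above.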
One drawback of this algorithm is that the gene location chosen in the second parent can fall outside the allowable range (less than 0 or greater than n − 1, where n is the string length). In this case we have chosen to replace any out-of-range location with either 0 or n − 1. This is computationally simpler than redrawing another random number, but can bias the selection of genes towards these two endpoints. We will address this problem in our future work.

4 Experimental Investigation
4.1 Test Suite
In this paper we concern ourselves only with unconstrained, multidimensional, multimodal (except for the Paraboloid function), nonlinear problems. The problem is formulated as:

minimize f(x), x ∈ Rⁿ.
We use a range of test functions from the BBOB-2013 (see [9]) list of noiseless functions. Most are well-known examples of minimization problems. Unless otherwise mentioned, the global minimum of the function is at the origin. All bold values are vectors, i.e. x = (x_1, …, x_n), and ‖x‖² = Σ_i x_i² is the square of the vector norm. In the experiments reported in the next section we use values of n in the set {2, 5, 10, 20, 50}.

Paraboloid: f(x) = Σ_i x_i². This is the only unimodal function we use.

Rastrigin: f(x) = 10n + Σ_i (x_i² − 10 cos(2πx_i)). This function has a large number of local optima.

Rosenbrock: f(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (1 − x_i)²], with the global minimum at 1 (where 1 denotes a vector consisting entirely of 1s).

Schwefel: f(x) = 418.9829n − Σ_i x_i sin(√|x_i|), with the global solution at x_i ≈ 420.9687.

Ackley: f(x) = −20 exp(−0.2 √(‖x‖²/n)) − exp((1/n) Σ_i cos(2πx_i)) + 20 + e.

Griewangk: f(x) = 1 + ‖x‖²/4000 − Π_i cos(x_i/√i).

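For reference, these benchmarks can be implemented directly; the exact constants and domains used in the paper are not recoverable from the text, so the definitions below follow the commonly used (BBOB-style) forms:

```python
import numpy as np

def paraboloid(x):          # unimodal sphere, minimum 0 at the origin
    return float(np.sum(np.asarray(x, float) ** 2))

def rastrigin(x):           # highly multimodal, minimum 0 at the origin
    x = np.asarray(x, float)
    return float(10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def rosenbrock(x):          # minimum 0 at (1, ..., 1)
    x = np.asarray(x, float)
    return float(np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2))

def schwefel(x):            # minimum ~0 at x_i ~ 420.9687
    x = np.asarray(x, float)
    return float(418.9829 * len(x) - np.sum(x * np.sin(np.sqrt(np.abs(x)))))

def ackley(x):              # minimum 0 at the origin
    x = np.asarray(x, float)
    n = len(x)
    return float(-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

def griewangk(x):           # minimum 0 at the origin
    x = np.asarray(x, float)
    i = np.arange(1, len(x) + 1)
    return float(np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1)
```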
4.2 k-Means Clustering Problem

The k-means clustering problem is a very well-known NP-hard problem that arises in many machine learning and signal processing applications. Given a set of N vectors x_1, …, x_N, the problem is to find a partition of the data into k clusters S_1, …, S_k, each represented by the centroid μ_j of the data in that partition (the mean), such that the sum of some metric distances, typically the L2 norm, between all data points and the nearest centroid is minimized. In this paper we use the Euclidean distance metric:

f(M) = Σ_{j=1}^{k} Σ_{x ∈ S_j} ‖x − μ_j‖²    (1)

Here we use M = {μ_1, …, μ_k} to denote the set of all centroids (means), S_j is as defined above, and each centroid is the mean of its subset, μ_j = (1/|S_j|) Σ_{x ∈ S_j} x, where |S_j| is the number of vectors/observations in the j-th subset. There are many local minima in the search space, and the standard iterative algorithm (Lloyd’s algorithm) typically finds a good one. There has been a lot of interest in heuristic algorithms for this problem; some well-known implementations include [17, 12, 31]. One of the best-known examples of k-means optimized by a GA is [18]. In our implementation, we consider only the 4-means problem (k = 4), and generate a set of random points for each dimension and use the same data for all of the algorithms to allow a fair comparison. We also report results of Lloyd’s algorithm using the implementation in the scikit-learn machine learning library for Python.
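The objective of Equation (1) is straightforward to evaluate for a GA chromosome that encodes all k centroids as one flat real vector; the use of squared Euclidean distances here (matching scikit-learn's `inertia_`) is an assumption about the exact form intended:

```python
import numpy as np

def kmeans_fitness(centroids, data):
    """Sum of squared Euclidean distances from each data point to its
    nearest centroid.  `centroids` is a flat chromosome of length k*d,
    reshaped into k centroid vectors of dimension d."""
    data = np.asarray(data, dtype=float)
    c = np.asarray(centroids, dtype=float).reshape(-1, data.shape[1])
    # squared distance from every point to every centroid, min over centroids
    d2 = ((data[:, None, :] - c[None, :, :]) ** 2).sum(axis=2)
    return float(d2.min(axis=1).sum())
```

Minimizing this fitness with an RCGA searches directly over centroid positions, with no explicit partition step as in Lloyd's algorithm.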
4.3 Experimental Setup
Each experiment consists of running every algorithm 20 times (from different randomized starting conditions) on a particular function with a pre-chosen dimensionality, for 5000 generations. For simplicity, all algorithms save a single elite individual at each generation (i.e., the fittest individual is saved at every generation and added to the next generation of individuals, while a randomly chosen offspring is deleted from the pool, so the population size stays constant), and we do not vary the size of the population or the recombination pool, which were both fixed at 400 (making this an elitist EA with constant population size). Tournament selection was used to select individuals for the recombination pool (see [28] for its description). As well as KBS, we also used two variants of crossover that are well known in the RCGA literature, BLX and SBX. For all recombination operators we use a rate of 1 (recombine all pairs in the pool). We also applied two different mutation operators, uniform and Gaussian. In both cases a per-gene mutation probability of 1/n was used, so that on average there was one mutation per string. For both KBS variants the mixing parameter was set to 0.4, and the variance of the location distribution in KBS-N to 4. These values were chosen to make the changes introduced by the KBS operator comparable to those of the other crossovers. All values are initialized uniformly at random in the interval specified in Section 4.1. In addition to the BLX crossover explained in Section 2, we use Simulated Binary Crossover (SBX, see [5, 1]):

c1 = 0.5[(1 + β)p1 + (1 − β)p2],  c2 = 0.5[(1 − β)p1 + (1 + β)p2],

where the spread factor β is drawn as β = (2u)^(1/(η+1)) if u ≤ 0.5 and β = (1/(2(1 − u)))^(1/(η+1)) otherwise, with u ∼ U(0, 1) and η the distribution index.
For our experiments we fix the distribution index η. We also use the following mutation operators in combination with the crossovers (to reduce bias):

Gaussian mutation (GM, see [33]): in our implementation we select a gene with probability 1/n and sample from a normal distribution centered on its current value to generate the new value, with the spread of the distribution scaled by the difference between b, the maximal value of the gene, and a, the minimal value.
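Both SBX and Gaussian mutation can be sketched as follows; the SBX spread-factor distribution is the standard one from [5, 1], while the Gaussian mutation parameters (per-gene rate defaulting to 1/n, a fixed `sigma`) are simplifying assumptions:

```python
import numpy as np

def sbx(p1, p2, eta=2.0, rng=None):
    """Simulated Binary Crossover: the spread factor beta is drawn so that
    offspring mimic the spread of one-point binary crossover; a larger
    distribution index eta keeps offspring closer to the parents."""
    rng = rng or np.random.default_rng(0)
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    u = rng.uniform(size=p1.shape)
    beta = np.where(u <= 0.5,
                    (2 * u) ** (1 / (eta + 1)),
                    (1 / (2 * (1 - u))) ** (1 / (eta + 1)))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return c1, c2

def gaussian_mutation(x, p=None, sigma=1.0, rng=None):
    """Gaussian mutation sketch: each gene is, with probability p
    (default 1/n, giving one mutation per string on average), replaced
    by a sample from a normal centred on its current value."""
    rng = rng or np.random.default_rng(0)
    x = np.array(x, dtype=float)
    p = p if p is not None else 1.0 / len(x)
    mask = rng.uniform(size=x.shape) < p
    return np.where(mask, rng.normal(x, sigma), x)
```

A useful property of SBX visible in the formula: the two offspring are symmetric about the parents' midpoint, so c1 + c2 = p1 + p2 gene-wise.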
We defined success as finding the global minimum to within a tolerance ε; a smaller tolerance was used for the 2-dimensional problems than for all of the others. If the algorithm was successful, then the generation at which this first happened was recorded. The three measures that we record to establish the efficiency of an algorithm on a problem (where r indexes the runs out of the total of R = 20 runs, x*_r is the best solution found in run r, and T_r is the number of generations it took the algorithm to find the solution) are:
f̄ = (1/R) Σ_{r=1}^{R} f(x*_r)    (2)

p = |S| / R, where S ⊆ {1, …, R} is the set of successful runs    (3)

T̄ = (1/|S|) Σ_{r ∈ S} T_r    (4)
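The three measures can be computed from the per-run records as follows; this is a sketch matching the descriptions above, since the exact forms of Equations 2–4 are not recoverable from the text:

```python
import numpy as np

def performance_measures(best_fitnesses, generations, tol):
    """Per-problem measures: mean best fitness over all runs, proportion
    of successful runs (fitness within tolerance), and mean number of
    generations to success over the successful runs only."""
    f = np.asarray(best_fitnesses, dtype=float)
    g = np.asarray(generations, dtype=float)
    success = f <= tol
    mean_fitness = float(f.mean())
    success_rate = float(success.mean())
    mean_generations = float(g[success].mean()) if success.any() else None
    return mean_fitness, success_rate, mean_generations
```

Note that the generation count is averaged only over successful runs, which is why cells with zero successes in Table 1 carry no runtime value.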
4.4 Experimental Results
The main results of this article are summarized in Tables 1, 2 and 3. In Table 1, in every cell (crossover and mutation types vs. function and dimension), the first value is the proportion of successful runs (Equation 3). The second value is the mean runtime, computed using Equation 4 and averaged over successful runs only. A run is considered successful if the tolerance is reached. Tables 2 and 3 simply display the mean fitness value at the end of the run, averaged over 20 runs.
Looking at Tables 1 and 2 gives a good general overview of the performance of the algorithms on black-box optimization problems: we can see which one does better on which function and can compare rates of convergence, since the faster an algorithm finds the basin of attraction around the global optimum, the better. Further analysis lets us identify cases where the operators tend to converge to local minima; for example, both KBS operators appear to suffer from this (in 10 or more dimensions) on the Rosenbrock and Schwefel functions. In contrast, both BLX and SBX suffer from slow convergence on these problems: it takes them much longer to explore the landscape segments containing promising solutions.
It is clear from Tables 1–3 that most 2-dimensional problems were very simple, but on most standard optimization problems the KBS algorithms outperform the mainstream crossovers, possibly because they exploit current knowledge more effectively. They do especially well on the 20- and 50-dimensional Ackley and Griewangk functions, which we see as a useful contribution to RCGA development if we compare the results, for example, to those in [10].
On the 4-means clustering problem (all dimensions) the results are reversed: KBS does worse than both BLX and SBX, quickly converging to a local solution and being unable to jump out of it. Although the other operators also converge prematurely, this takes rather longer than for KBS, resulting in better solutions. As with the Schwefel function, we attribute this to the lack of exploration capacity of both KBS operators, something that needs to be fixed in future work.
Nevertheless, all presented algorithms greatly outperform the k-means algorithm in the scikit-learn module for Python, as is shown in Table 3. One surprising result we encountered with the Griewangk function (dimensions 2 and 5): Gaussian mutation improved all results by up to a factor of 20, something we did not observe with any other function in our experimentation. Although we attribute it to the structure of the fitness landscape, other factors may be at play that require additional investigation.

Function  Dims  KBS-U+SM  KBS-U+GM  KBS-N+SM  KBS-N+GM  BLX+SM  BLX+GM  SBX+SM  SBX+GM
Paraboloid  2  1  7.2  1  4.75  1  6.95  1  4.5  1  13.7  1  6.35  1  13.65  1  5.75 
5  1  44.8  1  17.2  1  45.3  1  18.5  1  1643.8  1  233.15  1  120.15  1  60.2  
10  1  134.65  1  53.7  1  100.15  1  46.75  0    0    0.8  2563  1  1250  
20  1  272.45  1  94.55  1  382.55  1  99.35  0    0    0    0    
50  1  746  1  240  1  866  1  233  0    0    0    0    
Rosenbrock  2  1  20.5  1  25  1  32.4  1  27.45  1  46.1  1  38.6  1  35.45  1  31.15 
5  0.05  2825  0    0.05  4546  0.05  1424  0    0    0.05  4816  0    
10  0    0    0    0    0    0    0    0    
20  0    0    0    0    0    0    0    0    
50  0    0    0    0    0    0    0    0    
Ackley  2  0.95  2450  1  1050  1  1930  1  1335  1  3995  0.35  2155  0.15  1658  0.85  2180 
5  1  1431  1  571  1  1307  1  583  0    0    0    0.25  3422  
10  0.6  3324  1  2381  1  3211  0.05  1565  0    0    0    0    
20  0.25  3849  0.9  2536  0.25  3981  0.75  2320  0    0    0    0    
50  0.25  3652  0.95  2174  1  3991  0.7  2846  0    0    0    0    
Rastrigin  2  1  183  1  143  1  176  1  138  1  546  1  424  1  301  1  350 
5  1  685  1  726  1  594  1  655  0    0    0    0    
10  0.45  3303  0.7  3111  0.6  2872  0.75  2538  0    0    0    0    
20  0.1  4064  0.05  4196  0    0.1  4255  0    0    0    0    
50  0    0    0    0    0    0    0    0    
Schwefel  2  1  41  1  17  1  23  1  20  1  34  1  30  1  36  1  23 
5  0.05  1879  0.05  1541  0.1  2144  0    0    0    0    0.05  3864  
10  0    0    0    0    0    0    0    0    
20  0    0    0    0    0    0    0    0    
50  0    0    0    0    0    0    0    0    
Griewangk  2  1  1388  1  44.4  1  1496  1  38  1  2362  1  102  0.8  1610  1  65 
5  0.9  2312  1  127  0.95  1970  1  105  0    0.95  1445  0    1  1224  
10  0    0.85  1385  0    1  1009  0    0    0    0    
20  0.05  4361  0.8  1187  0    0.7  961  0    0    0    0    
50  0.05  4830  1  862  0    1  757  0    0    0    0   
Function  Dims  KBS-U+SM  KBS-U+GM  KBS-N+SM  KBS-N+GM  BLX+SM  BLX+GM  SBX+SM  SBX+GM
Paraboloid  2  3.63e7  1.37e7  2.98e7  1.83e7  2.00e5  5.23e6  7.21e6  2.10e6 
5  1.13e5  1.91e6  3.26e6  8.49e7  0.045  0.018  0.003  0.0016  
10  0.0008  0.00025  0.0004  0.00016  0.875  0.373  0.07  0.03  
20  0.0022  0.00109  0.0049  0.001  8.102  3.616  0.67  0.31  
50  0.0078  0.003  0.012  0.006  80.76  48.16  7.78  3.87  
Rosenbrock  2  2.84e5  1.50e5  4.40e5  3.20e5  7.47e5  5.66e5  2.80e5  2.54e5 
5  0.49  1.043  0.83  0.80  2.48  2.018  1.53  1.20  
10  8.32  8.32  8.03  8.06  35.99  31.30  14.43  9.62  
20  18.86  18.78  18.97  18.74  403.57  330.97  82.83  64.14  
50  48.91  48.79  48.97  48.86  7606.79  6353.38  572.33  481.14  
Ackley  2  0.003  9.66e4  0.003  0.002  0.05  0.01  0.02  0.006 
5  0.008  0.003  0.007  0.003  2.59  0.90  0.72  0.17  
10  0.11  0.035  0.07  0.019  5.11  2.84  2.42  0.76  
20  0.164  0.046  0.18  0.06  8.30  4.46  3.91  2.39  
50  0.165  0.044  0.25  0.08  12.62  7.76  6.41  4.03  
Rastrigin  2  1.73e5  2.42e5  7.42e6  1.01e5  1e3  5.19e4  5.15e4  3.71e4 
5  2.99e4  1.65e4  1.05e4  1.02e4  2.67  2.22  1.32  0.92  
10  0.62  0.44  0.33  0.16  22.83  20.08  14.70  10.63  
20  2.72  2.32  3.38  2.56  89.97  83.62  60.24  59.30  
50  9.93  3.28  10.095  4.46  376.58  366.07  315.46  301.69  
Schwefel  2  2.85e5  3.03e5  2.2e5  2.8e5  7.2e5  5.87e5  4.2e5  2.5e5 
5  0.67  0.73  0.75  0.81  2.39  2.103  1.27  1.21  
10  8.33  8.18  8.13  8.02  37.59  30.4  10.15  9.71  
20  18.84  18.84  18.85  18.80  424.11  310.51  83.05  67.81  
50  49.13  48.82  49.22  48.80  7678  6104  553.90  501.33  
Griewangk  2  0.003  1.35e5  0.004  3.26e6  0.01  2.16e4  0.007  1e4 
5  0.05  0.005  0.051  0.009  0.62  0.07  0.37  0.04  
10  0.49  0.03  0.42  0.02  1.76  0.47  1.04  0.35  
20  0.57  0.05  0.74  0.02  8.27  1.05  1.63  0.72  
50  0.59  0.007  0.78  0.01  71.18  1.75  7.49  1.07 
Function  Dims  KBS-U+SM  KBS-U+GM  KBS-N+SM  KBS-N+GM  BLX+SM  BLX+GM  SBX+SM  SBX+GM  SciKit
4-Means Clustering  2  10.54  10.44  10.35  10.51  8.63  8.63  8.62  8.62  10.31 
5  37.05  37.24  36.97  37.002  31.51  31.54  31.37  31.40  64.07  
10  61.44  61.61  61.33  61.23  50.20  50.17  49.78  49.94  167.52  
20  97.29  97.14  96.96  97.15  80.58  80.55  83.62  83.28  369.13  
50  159.24  159.35  159.25  159.24  143.47  142.37  150.87  148.77  1063.29 
Having established the absolute performance of the algorithms, we now rank them based on these results. As our data is not Gaussian-distributed and is heteroscedastic, we have chosen a nonparametric statistical test, specifically the Mann-Whitney U statistic (also known as the one-sided nonparametric Wilcoxon rank-sum test), which is defined as:

U_i = R_i − n_i(n_i + 1)/2,  i = 1, 2,

where n_1, n_2 are the sample sizes (equal in our case) and R_1, R_2 are the sums of ranks in each sample. A sample size of 20 (which is our case) is enough to give a reliable estimate of the statistical significance of the difference of the means. The test returns a z-statistic from a normal distribution and a p-value that is compared to the chosen significance level. The sign of the z-statistic shows which sample’s mean is smaller: (−) for the first and (+) for the second. For reasons of space the particular values are not reported, but the main results we obtain from this analysis can be summarized as follows:
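The U statistic can be computed directly from pooled ranks; this sketch uses the rank-sum form U1 = R1 − n1(n1 + 1)/2, with average ranks assigned over ties (the z-approximation then standardizes U by its mean n1·n2/2):

```python
import numpy as np

def mann_whitney_u(x, y):
    """Mann-Whitney U via joint ranking: pool the two samples, rank them
    (average ranks over ties), then U1 = R1 - n1(n1+1)/2 and
    U2 = n1*n2 - U1."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    pooled = np.concatenate([x, y])
    order = pooled.argsort()
    ranks = np.empty(len(pooled))
    ranks[order] = np.arange(1, len(pooled) + 1)
    for v in np.unique(pooled):          # average ranks over tied values
        tie = pooled == v
        ranks[tie] = ranks[tie].mean()
    r1 = ranks[: len(x)].sum()
    u1 = r1 - len(x) * (len(x) + 1) / 2
    return u1, len(x) * len(y) - u1
```

When every value in the first sample is smaller than every value in the second, U1 = 0, the extreme case indicating the first sample's values rank uniformly lower.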

On the overwhelming majority of the functional optimization tasks, KBS-based algorithms outperform both BLX and SBX, and this difference is statistically significant (i.e., systematic). Among the exceptions are the 2-dimensional Rosenbrock and Schwefel functions and the 5-dimensional Griewangk function, where SBX+GM is equally efficient, because the corresponding z-statistic values are not significantly different from 0.

The 4-means clustering problem is best approximated (the global solution is, of course, unknown) by SBX-based algorithms (up to n = 10) and BLX-based algorithms (for dimensionality n ≥ 20).

Gaussian mutation improves performance on many instances (but never on the 4-means problem). This is true for each of the 4 types of crossover. It boosts performance especially well on the Griewangk test function (all dimensions).

The two variants of the KBS recombination operator are almost equally good (most differences are not statistically significant). Out of 35 instances (7 functions × 5 dimensions), KBS-U+SM outperforms KBS-N+SM in 4 instances, KBS-N+SM outperforms KBS-U+SM in 5 instances, KBS-U+GM outperforms KBS-N+GM in 6 instances, and KBS-N+GM outperforms KBS-U+GM in 3 (the rest are not statistically significant). Overall, one variant has only a slight advantage over the other.
Overall, we attribute the relative underperformance of the BLX and SBX operators to our choice of elitism and selection function, which prevent successful exploitation of promising fitness basins. We intend to address this problem in our future work.
5 Conclusions and Future Work
In this article we have presented and tested a new recombination operator for RCGAs, a variant of the K-Bit-Swap that shares certain features with the AX and BLX crossovers and with Gaussian mutation. The principal difference is that in the KBS operator the locations of the bits selected in the two strings do not have to match, but are chosen randomly. We have considered two versions: one choosing the two sites both uniformly at random, and one using a normal distribution centered on the selected bit in the first string.
Both KBS operators have been shown to be superior to both the BLX and SBX crossovers on functional optimization problems, but they underperform on the 4-means approximation problem.
We also looked into some theoretical properties of the presented operators. KBS samples different genes in the two strings, thus slightly compensating for the absence of an exploration bias. If we consider uniform crossover and simple 1-bit mutation (a standard choice for many applications), it is clear that fairly quickly the matching gene values in the parents will be close, and even if we construct an interval around these values (as in BLX crossover), we can easily get stuck in an unpromising fitness region. KBS offers a workaround: although new values are not sampled from outside the interval between the two parental values, the second parent’s value at a different location may be very different from the value in the same feature, and thus it mimics exploration. Since the location selection for KBS is not restricted to the current feature, even if other features have already converged to local optima, KBS has a relatively high probability of selecting a good schema and sampling in the area close to the optimal solution. In contrast to the BLX operator, new values lie strictly between the values selected in the parents. Therefore, both variants of the KBS operator are heavily biased towards exploitation rather than further exploration of the search space (see [8, 11, 22]).
The logical next step would be to enhance the operator with an interval or other features that would enable the generation of values outside the interval between the parental values. Also, to explore the properties of KBS further, we intend to study selection pressure mechanisms that push evolution towards areas with high-quality schemata (in this article a simple tournament selection was used) and more sophisticated elitism functions (instead of keeping the single fittest string). We believe that working along these lines will help improve the performance of RCGAs on multidimensional and multimodal functions.
References
 [1] K. Deb and R. B. Agrawal. Simulated binary crossover for continuous search space. 1994.

[2] A. Blanco, M. Delgado, and M. Pegalajar. A real-coded genetic algorithm for training recurrent neural networks. Neural Networks, 14(1):93–105, 2001.
 [3] M. Črepinšek, S.-H. Liu, and M. Mernik. Exploration and exploitation in evolutionary algorithms: a survey. ACM Computing Surveys (CSUR), 45(3):35, 2013.
 [4] K. Deb, A. Anand, and D. Joshi. A computationally efficient evolutionary algorithm for realparameter optimization. Evolutionary computation, 10(4):371–395, 2002.
 [5] K. Deb and H.-G. Beyer. Self-adaptive genetic algorithms with simulated binary crossover. In Complex Systems. Citeseer, 1995.
 [6] B. Doerr, E. Happ, and C. Klein. Crossover can provably be useful in evolutionary computation. In Proceedings of the 10th annual conference on Genetic and evolutionary computation, pages 539–546. ACM, 2008.
 [7] S. M. Elsayed, R. Sarker, D. L. Essam, et al. GA with a new multi-parent crossover for solving IEEE-CEC2011 competition problems. In Evolutionary Computation (CEC), 2011 IEEE Congress on, pages 1034–1040. IEEE, 2011.
 [8] L. J. Eshelman. Real-coded genetic algorithms and interval-schemata. Foundations of Genetic Algorithms, 2:187–202, 1993.
 [9] S. Finck, N. Hansen, R. Ros, and A. Auger. Real-parameter black-box optimization benchmarking 2010: Presentation of the noiseless functions. Technical report, Citeseer, 2010.
 [10] S. García, D. Molina, M. Lozano, and F. Herrera. A study on the use of nonparametric tests for analyzing the evolutionary algorithms’ behaviour: a case study on the cec’2005 special session on real parameter optimization. Journal of Heuristics, 15(6):617–644, 2009.
 [11] D. E. Goldberg. Genetic algorithms in search, optimization, and machine learning, 1989. ISBN: 0201157675, 1989.
 [12] J. A. Hartigan and M. A. Wong. Algorithm AS 136: A k-means clustering algorithm. Applied Statistics, pages 100–108, 1979.

[13] F. Herrera, M. Lozano, E. Pérez, A. M. Sánchez, and P. Villar. Multiple crossover per couple with selection of the two best offspring: an experimental study with the BLX crossover operator for real-coded genetic algorithms. In Advances in Artificial Intelligence, pages 392–401. Springer, 2002.
 [14] F. Herrera, M. Lozano, and A. M. Sánchez. A taxonomy for the crossover operator for real-coded genetic algorithms: An experimental study. International Journal of Intelligent Systems, 18(3):309–338, 2003.
 [15] F. Herrera, M. Lozano, and A. M. Sánchez. Hybrid crossover operators for real-coded genetic algorithms: an experimental study. Soft Computing, 9(4):280–298, 2005.
 [16] J. H. Holland. Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence. U Michigan Press, 1975.
 [17] T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu. An efficient k-means clustering algorithm: Analysis and implementation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 24(7):881–892, 2002.
 [18] K. Krishna and M. N. Murty. Genetic k-means algorithm. Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, 29(3):433–439, 1999.
 [19] S.-H. Ling and F. F. Leung. An improved genetic algorithm with average-bound crossover and wavelet mutation operations. Soft Computing, 11(1):7–31, 2007.
 [20] Z. Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs. Springer Science & Business Media, 2013.
 [21] Z. Michalewicz and C. Z. Janikow. Handling constraints in genetic algorithms. In ICGA, pages 151–157, 1991.
 [22] M. Mitchell. An introduction to genetic algorithms, 1996. PHI Pvt. Ltd., New Delhi, 1996.
 [23] M. Mitchell, S. Forrest, and J. H. Holland. The royal road for genetic algorithms: Fitness landscapes and ga performance. In Proceedings of the first european conference on artificial life, pages 245–254. Cambridge: The MIT Press, 1992.
 [24] T. Pencheva, M. Angelova, K. Atanassov, and P. Vasant. Genetic algorithms quality assessment implementing intuitionistic fuzzy logic. Handbook of Research on Novel Soft Computing Intelligent Algorithms: Theory and Practical Applications, page 327, 2013.
 [25] O. Roeva, T. Slavov, and S. Fidanova. Populationbased vs. single point search metaheuristics for a pid controller tuning. Handbook of Research on Novel Soft Computing Intelligent Algorithms: Theory and Practical Applications, P. Vasant (Ed.), IGI Global, pages 200–233, 2013.
 [26] W. M. Spears et al. Crossover or mutation. Foundations of genetic algorithms, 2:221–237, 1992.
 [27] M. Srinivas and L. M. Patnaik. Adaptive probabilities of crossover and mutation in genetic algorithms. Systems, Man and Cybernetics, IEEE Transactions on, 24(4):656–667, 1994.
 [28] A. Ter-Sarkisov. Computational Complexity of Elitist Population-Based Evolutionary Algorithms. PhD thesis, Massey University, 2012.
 [29] A. Ter-Sarkisov, S. Marsland, and B. Holland. The k-Bit-Swap: A New Genetic Algorithm Operator. In Genetic and Evolutionary Computation Conference (GECCO) 2010, pages 815–816, 2010.
 [30] M. Thakur. A new genetic algorithm for global optimization of multimodal continuous functions. Journal of Computational Science, 2013.
 [31] K. Wagstaff, C. Cardie, S. Rogers, S. Schrödl, et al. Constrained kmeans clustering with background knowledge. In ICML, volume 1, pages 577–584, 2001.
 [32] A. H. Wright et al. Genetic algorithms for real parameter optimization. In FOGA, pages 205–218. Citeseer, 1990.
 [33] Y. Yoon and Y.-H. Kim. The roles of crossover and mutation in real-coded genetic algorithms. Bio-Inspired Computational Algorithms and Their Applications, InTech, 2012.
 [34] Y. Yoon and Y.-H. Kim. Geometricity of genetic operators for real-coded representation. Applied Mathematics and Computation, 219(23):10915–10927, 2013.