...continued from previous post

Perhaps we could limit and specialize this idea for some purposes: instead of simply trying to find optimal algorithms for any given purpose, we could treat a purpose as a domain of constraints, making the search analogous to a really, really complex calculus problem.

For example, take image compression. Your constraints could be that the resultant image need not be the original, but must not differ from it by more than a certain amount: per pixel, per wavelet, as an average over the whole picture, or some combination of these, in a number of separate dimensions. One dimension would be a rendered color's distance from the original color within a particular perceptual color space; another could be the presence of specific distorting artifacts; others could get into the qualitative difference between artificially stochastic content in a rendered image and source data that we consider virtually random in specific respects. A further constraint could be how close to optimal the solution must be, or we could even look for an optimal balance between compressed size and the time it takes to compute the compressed image, and/or the time it would take to decompress it.
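To make the idea a bit more concrete, here is a minimal sketch of what checking two of those constraint dimensions might look like: a per-pixel error bound and an average error bound over the whole image. Everything here is an illustrative assumption, not part of any real codec; the pixel data, function name, and thresholds are all invented for the example.

```python
# Minimal sketch: accept a lossy rendering only if it satisfies two of the
# constraint dimensions described above. All names and values are
# hypothetical, chosen just to illustrate the shape of the problem.

def within_constraints(original, rendered, per_pixel_max, average_max):
    """Accept the rendered image only if every pixel's absolute error stays
    at or under per_pixel_max AND the mean error over the whole image stays
    at or under average_max."""
    errors = [abs(o - r) for o, r in zip(original, rendered)]
    return (max(errors) <= per_pixel_max
            and sum(errors) / len(errors) <= average_max)

# Toy grayscale "images" as flat lists of pixel intensities (made up):
original = [10, 200, 50, 128]
rendered = [12, 197, 55, 128]

print(within_constraints(original, rendered, per_pixel_max=8, average_max=4))
# → True (worst pixel error is 5, mean error is 2.5)
```

A real version would replace the absolute-difference metric with a distance in a perceptual color space, and add further dimensions (artifact detectors, stochastic-texture comparisons) as additional predicates combined with this one.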

We obviously don't have the tools necessary to even begin to evaluate such a complex quasi-mathematical problem, but perhaps we could explore methods of evaluating such constraint and optimality problems by applying the same principle to itself: exploring methods of evaluating the problem of exploring methods, and so forth, recursively.

We don't have to work strictly by starting from an enormously complicated constraint and optimality problem expressed formally and trying to evaluate that, either. We could perhaps gain tools useful for solving problems at *both* ends of the recursive progression by starting with evaluations of simpler systems, then gradually working our way up, adding a whole other dimension to the learning curve. Some tools we might use, BTW, besides the various analytical problem-solving methods known from all of maths, are artificial neural networks, particle swarms, and genetic algorithms. We could even end up developing new alternatives to, or variants of, those methods by means of our new methodology.
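As a taste of one tool on that list, a genetic algorithm can be sketched in a few lines: keep the fitter half of a population each generation and refill it by mutating the survivors. The objective function, population size, and mutation rate below are all illustrative assumptions; a real application would plug in something like "compressed size subject to the quality constraints" as the fitness.

```python
import random

def evolve(fitness, pop_size=30, generations=100, mutation=0.3, seed=0):
    """Tiny genetic algorithm over real-valued genomes: each generation,
    keep the fitter half (lower fitness = better) and refill the population
    with Gaussian mutations of the survivors."""
    rng = random.Random(seed)
    pop = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        pop = survivors + [x + rng.gauss(0, mutation) for x in survivors]
    return min(pop, key=fitness)

# Toy objective standing in for a real cost (purely an assumption):
# minimize (x - 3)^2, whose optimum is x = 3.
best = evolve(lambda x: (x - 3.0) ** 2)
print(round(best, 2))
```

The same loop generalizes to vector-valued genomes, and the selection and mutation steps are exactly the parts one might hope to improve, recursively, with the methodology the post describes.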

Ultimately, there is no knowing what we might be able to do for civilization in this manner, since we may apply what we discover to optimizing systems across the board: from data compression to CPU design, aerodynamics, new kinds of gasoline engines, manufacturing processes, and economics.

## Sunday, February 15, 2009
