Thursday, September 30, 2010

A different general-purpose way of tackling complex problems

Well, I'm back with a fury. This time I have a program that I find so much fun to work on that I'm changing my sleep schedule as a result; soon you'll see a new blog post from me about polyphasic sleeping.

On to my program. Right now (to my understanding), artificial intelligence uses a variety of different types of "neural networks" in a variety of ways.
In each case, they use a particular neuron and connection structure, and vary the condition under which each neuron is activated.

Stay with me here!

They do this so that the network can be optimized mathematically, and that optimization finds a good solution, but it's nearly impossible to find the best solution. (There's more to this for those who understand: the optimization finds a local minimum of the error, not the best possible one.)
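
To make that concrete, here's a tiny Python sketch (my own toy illustration, with a made-up error function, not anything from my program) of the problem: plain downhill optimization settles into the nearest valley, not the deepest one.

import math

def error(x):
    # a made-up error surface with more than one valley
    return math.sin(3 * x) + 0.1 * x * x

def gradient(x, h=1e-5):
    # numerical slope, good enough for the illustration
    return (error(x + h) - error(x - h)) / (2 * h)

x = 2.0                      # starting guess
for _ in range(1000):
    x -= 0.01 * gradient(x)  # step downhill a little each time

print(x, error(x))           # settles near x = 1.5 (a local minimum),
                             # while the deepest valley is near x = -0.5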

Well, it turns out they can only do this optimization (again, to my understanding) if they already know what the program SHOULD output. That's easy for stock markets (because you can use past data), but NOT so good for learning games, where we do NOT know the best possible move.

Well, for a situation like chess, or perhaps some other complicated problem, why would we even bother using the same neurons and neuron structure?! If we can't optimize, perhaps we can do something FAR more interesting.

Consider this:
Change the very structure by which neural networks work. Bend it, twist it until it no longer even resembles a neural network; make varieties of different structures.
Then have them compete against each other.
Let it soak in.....

What this implies:
Each unique neural network will be described by a DNA strand that defines the structure of the network and of each neuron. The best structures will be saved and will compete to be the best at the problem I want to solve.
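
To give an idea of what I mean (this is just a toy sketch, all the names are made up for illustration), a DNA strand could be nothing more than a list of genes, one per neuron, saying what that neuron's threshold and outgoing weight are and which other neurons it listens to:

import random

def random_dna(max_neurons=8):
    # one gene per neuron: a threshold, an outgoing weight,
    # and which other neurons it takes input from
    n = random.randint(2, max_neurons)
    return [{
        "threshold": random.uniform(-1.0, 1.0),
        "weight": random.uniform(-1.0, 1.0),
        "inputs": random.sample(range(n), random.randint(1, n)),
    } for _ in range(n)]

def mutate(dna, rate=0.1):
    # copy the strand, randomly nudging a few genes
    child = [dict(gene, inputs=list(gene["inputs"])) for gene in dna]
    for gene in child:
        if random.random() < rate:
            gene["threshold"] += random.uniform(-0.2, 0.2)
            gene["weight"] += random.uniform(-0.2, 0.2)
    return child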

One more time... let it soak in....

I can test to see how well a neural network learns! By doing this, I am no longer trying to fit one problem with one structure. Instead, I am FINDING the structures that approach the problem the best!!!

The structures with the best DNA will be selected and will compete against each other; that means the ones that LEARN THE BEST.
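
Selection itself could be as simple as this (again, just an illustration in Python, reusing the mutate helper from the sketch above): rank the strands by how much their networks improve while training, keep the best, and refill the population with mutated copies of the winners.

def next_generation(population, learning_fitness, keep=4, children_per_parent=3):
    # population: a list of DNA strands
    # learning_fitness: a function that builds the network a strand describes,
    #   trains it for a while, and returns how much it improved
    ranked = sorted(population, key=learning_fitness, reverse=True)
    survivors = ranked[:keep]
    children = [mutate(parent) for parent in survivors
                for _ in range(children_per_parent)]
    return survivors + children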

In the absolute best case scenario (unlikely, but let's fantasize):
I could find neural networks that work much better and much more efficiently than traditional neural networks. And not just for one problem: any DNA strand is capable of building a network to LEARN ANYTHING (how well it learns is, of course, another story). So it's possible that, if a really good neural network can be expressed in my DNA, it will be able to learn better than any known neural network. I'm not making any promises, of course!

I do have cautious optimism that it will be better at tackling an individual game than traditional neural networks. (However, please do keep in mind that my knowledge is limited :-) I do research, but I don't really have a teacher to ask questions of.)

Final note:
Now, this program is actually capable of evolving into a traditional neural network (without the optimization, but optimization is overrated anyway). However, with all these possible new neural networks NEVER BEFORE DISCOVERED, I doubt that will even be useful, because other structures are likely to fit a specific problem much better than a traditional one.

In my next entry, I will explore some of the awesome traits any neural network could possibly obtain and take advantage of.

Saturday, June 12, 2010

So how about this

Imagine neural networks.
Okay, well, let's imagine we separate out three neural networks:

one decides what the important input is,
one decides what that input should really be (replace 1s with 3s),
and the last makes the final decision.

This is just a concept; I have no reason, not even an imaginary reason, to think it would work, but it sounds fun.
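
Still, just to make the split concrete, here's roughly what I picture (a toy sketch with dummy stand-ins for the three networks):

def decide(board, selector_net, rewriter_net, decider_net):
    important = selector_net(board)    # which inputs actually matter
    cleaned = rewriter_net(important)  # what those inputs "should really be"
    return decider_net(cleaned)        # the final decision

board = [1, 0, 1, 1]
selector = lambda b: [x for x in b if x]                # keep the important inputs
rewriter = lambda xs: [3 if x == 1 else x for x in xs]  # replace 1s with 3s
decider = lambda xs: sum(xs)                            # some final decision
print(decide(board, selector, rewriter, decider))       # -> 9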

It has been a long time since I've posted. I have a few ideas like this one, and others.
How about neural networks that first train by "learning the rules"? One neural network that observes winning situations during training could vote on whether or not a move is a winning move, and another could vote on whether or not it is a losing move. The third one could either 1. not be a neural network at all and just decide based on which move has the highest ranking, or 2. be a neural network that takes those decisions, plus the entirety of the board, to make a final decision.
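
In code, option 1 could be as simple as this (my own sketch of the idea, with the two voting networks passed in as plain functions):

def pick_move(possible_moves, win_net, lose_net):
    # rank each move by "how much it looks like a win" minus
    # "how much it looks like a loss", and take the best one
    def ranking(move):
        return win_net(move) - lose_net(move)
    return max(possible_moves, key=ranking)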

The idea I'm actually working on right now is quite different from the two ideas mentioned above.
An individual 'neuron' in a neural network does four things (see the sketch after this list):
1. gets input
2. adds all that input together
3. compares the sum to a number, aka the "threshold"
4. sends out a number (aka the "weight") based on whether the threshold was reached; this number is sent to another neuron or used to determine the final output.
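
Written out in plain Python, the standard neuron described above is just this (a minimal sketch of the traditional version, before any rule-bending):

def neuron(inputs, threshold, weight):
    total = sum(inputs)               # 1-2. get the inputs and add them up
    fired = total >= threshold        # 3. compare the sum to the threshold
    return weight if fired else 0.0   # 4. send a number onward if it fired

print(neuron([0.4, 0.5, 0.3], threshold=1.0, weight=0.7))  # -> 0.7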

Well, I ask, why not bend the rules? Who said these rules were necessarily the best rules? I know one can optimize neural networks that function this way to find some local minimum of the error, but perhaps some other organization is more flexible and can find a better minimum faster and more consistently. More later.