Thursday, September 30, 2010

A different general-purpose way of tackling complex problems

Well, I'm back with a fury. This time I have a program that I find so much fun to work on that I am changing my sleep schedule as a result; soon you will see a new blog about polyphasic sleeping.

On to my program. Right now (to my understanding), there are a variety of different types of "neural networks" in artificial intelligence, and a variety of ways to use them.
In each case, they use a particular neuron and connection structure, and vary the condition under which each neuron is activated.

Stay with me here!

They do this so that the network can be optimized with mathematics, and with that optimization they find a good solution, but it's near impossible to find the best solution. (There's more to this for those who understand: it finds a local minimum of the error, not the best possible one.)
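To make that local-minimum point concrete, here's a toy sketch (my own made-up curve, not any real network's error surface): gradient descent slides into whichever valley it starts near, and one valley is worse than the other.

```python
def f(x):
    # a toy curve with two valleys: a shallow one near x = 0.95
    # and a deeper one near x = -1.05
    return 0.5 * x**4 - x**2 + 0.2 * x

def grad(x):
    return 2 * x**3 - 2 * x + 0.2

def descend(x, steps=2000, lr=0.01):
    # plain gradient descent: always slide downhill from where you start
    for _ in range(steps):
        x -= lr * grad(x)
    return x

a = descend(1.0)    # starts near the shallow valley
b = descend(-1.0)   # starts near the deep valley
print(f(a) > f(b))  # True: the first run got stuck in a worse minimum
```

Both runs stop at a valley bottom, but only one of them found the deep one; neither run can see past the hill between them.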

Well, it turns out they can only do this optimization (this is to my understanding) if they already know what the program SHOULD output. That's easy for stock markets (because you can use past data), but NOT so good for learning games, where we DO NOT know the best possible move.

Well, for a situation like chess, or perhaps some other complicated problem, why would we even bother using the same neurons and neuron structure?! If we can't optimize, perhaps we can do something FAR more interesting.

Consider this:
Change the very structure by which neural networks work. Bend it, twist it until it no longer even resembles a neural network; make varieties of different structures.
Then have them compete against each other.
Let it soak in.....

What this implies:
Each unique neural network will be described by a DNA strand that defines the structure of the network and of each neuron. The best structures will be saved and will compete to be the best at the problem I want to solve.
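As a rough sketch of what one of those DNA strands might look like (everything here, the gene layout and the names, is my own guess at an encoding, not anything standard): each gene describes one neuron, which earlier values it reads, its weights, and its threshold; "expressing" the DNA builds and runs the network it describes.

```python
import random

# Hypothetical encoding: each "gene" is one neuron -- which earlier
# outputs it reads, its weights, and its activation threshold.
def random_gene(n_inputs, index):
    pool = n_inputs + index  # a neuron may read any earlier value
    sources = random.sample(range(pool), k=min(2, pool))
    return {
        "sources": sources,
        "weights": [random.uniform(-1, 1) for _ in sources],
        "threshold": random.uniform(-1, 1),
    }

def random_dna(n_inputs=3, n_neurons=5):
    return [random_gene(n_inputs, i) for i in range(n_neurons)]

def express(dna, inputs):
    """Build and run the network the DNA describes; the last neuron decides."""
    values = list(inputs)
    for gene in dna:
        total = sum(w * values[s] for w, s in zip(gene["weights"], gene["sources"]))
        values.append(1.0 if total > gene["threshold"] else 0.0)
    return values[-1]

dna = random_dna()
print(express(dna, [0.2, 0.5, 0.9]))  # 0.0 or 1.0
```

Mutating a strand would then just mean nudging a weight or rewiring a source, and two strands could compete by comparing what `express` returns on the same problems.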

One more time... let it soak in...

I can test to see how well a neural network learns! By doing this, I am no longer trying to fit one problem with one structure; I am FINDING the structures that approach the problem best!!!

The structures with the best DNA, meaning the ones that LEARN THE BEST, will be selected and will compete against each other.

In the absolute best case scenario (unlikely, but let's fantasize):
I could find neural networks that work much better and much more efficiently than traditional neural networks. And not just for one problem: any DNA strand is capable of building a network to LEARN ANYTHING (how well it learns is, of course, another story). So it's possible that, if a really good neural network is expressible in my DNA, it will be able to learn better than any known neural network. I'm not making any promises, of course!

I do have cautious optimism that it will be better at tackling an individual game than traditional neural networks. (However, please do keep in mind, my knowledge is limited :-) I do research, but I don't really have a teacher to ask questions of.)

Final note:
Now, this program is actually capable of evolving into a traditional neural network (without optimization, but optimization is overrated anyway). However, with all these possible new neural networks NEVER BEFORE DISCOVERED, I doubt that will even be useful: other structures are likely to fit a specific problem much better than a traditional one does.

In my next entry, I will explore some of the awesome traits any neural network could possibly obtain and take advantage of.

Saturday, June 12, 2010

So how about this

Imagine neural networks.
Okay, well, let's imagine we separate out three neural networks:

one decides what the important input is
one decides what that input should really be (replace 1's with 3's)
the last makes the final decision
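Purely as a sketch, the three-network split might look like this, with dummy stand-ins for each network (none of them exist yet, so the logic inside each function is made up):

```python
# Stand-ins for the three networks; each would really be a trained network.
def pick_important(inputs):
    # network 1: decide which inputs matter (here: keep the two largest)
    return sorted(inputs, reverse=True)[:2]

def reinterpret(inputs):
    # network 2: decide what the input should really be (replace 1's with 3's)
    return [3 if x == 1 else x for x in inputs]

def decide(inputs):
    # network 3: make the final decision (here: a simple sum-and-threshold)
    return 1 if sum(inputs) > 4 else 0

def pipeline(raw):
    return decide(reinterpret(pick_important(raw)))

print(pipeline([1, 0, 2, 1]))  # keeps [2, 1], the 1 becomes 3, 5 > 4 -> 1
```

The point of the split is that each stage could be evolved or trained on its own job, instead of one network doing everything at once.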

This is just a concept; I have no reason, not even an imaginary reason, to think it would work, but it sounds fun.

It has been a long time since I've posted. I have a few ideas like this, and others.
How about neural networks that first train by "learning the rules"? One neural network that observes winning situations during training could vote on whether or not a move is a winning move, and another could vote on whether or not it is a losing move. The third could either 1. not be a neural network at all, and just decide based on which move has the highest ranking, or 2. be a neural network that takes those votes, along with the entirety of the board, to make a final decision.
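A sketch of that voting setup, using option 1 (the final chooser is not a network), where the two voters are made-up scoring functions standing in for real trained networks:

```python
def win_vote(move, board):
    # stand-in: a network trained on winning positions would score the move here
    return 0.8 if move == "center" else 0.3

def lose_vote(move, board):
    # stand-in: a network trained on losing positions
    return 0.1 if move == "center" else 0.6

def choose(moves, board):
    # option 1: not a network, just pick the move with the highest ranking
    return max(moves, key=lambda m: win_vote(m, board) - lose_vote(m, board))

print(choose(["corner", "center", "edge"], board=None))  # prints "center"
```

Option 2 would replace `choose` with a third network that takes the two votes plus the whole board as its inputs.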

My idea right now that I'm actually working on is highly different from the two ideas mentioned above.
An individual 'neuron' in a neural network does four things:
1. gets input
2. adds all that input
3. compares the sum to a number, aka the "threshold"
4. sends out a number (aka the "weight") based on whether the threshold was reached; this number is sent to another neuron or used to determine the final output.
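Those four steps, written out literally as a minimal sketch:

```python
def neuron(inputs, threshold, weight):
    # 1. get input  2. add it all up
    total = sum(inputs)
    # 3. compare the sum to the threshold
    # 4. send out the weight if the threshold was reached, else 0
    return weight if total >= threshold else 0.0

# two neurons feeding a third, which gives the final output
a = neuron([0.4, 0.7], threshold=1.0, weight=0.5)   # 1.1 >= 1.0 -> 0.5
b = neuron([0.1, 0.2], threshold=1.0, weight=0.9)   # 0.3 <  1.0 -> 0.0
out = neuron([a, b], threshold=0.4, weight=1.0)     # 0.5 >= 0.4 -> 1.0
print(out)
```

These are the rules the next paragraph asks about bending: nothing stops a "neuron" from multiplying instead of adding, or outputting more than one number.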

Well, I ask: why not bend the rules? Who said these rules were necessarily the best rules? I know one can optimize neural networks that function this way to find some local minimum of error, but perhaps some other organization is more flexible, and can find a better minimum faster and more consistently. More later.

Saturday, December 5, 2009

Somebody discouraged me, but I am back with a vengeance

It has been three months since my last post. Since then I have slowed down due to college. I need to keep up with this blog for my sake!!

So in three months what progress has been made??

My idea (as I've researched and found out) is a more complicated neural network. This does not necessarily mean it will be more effective, though that would be pretty sweet.

So here's what's going on
The input will always be a list of numbers representing the environment to the algorithm.
The network, using its prewritten rules and logic, will take the information, bend it, twist it, multiply some stuff, and get a result.

The result will always be a single number.

The environment will be programmed to use numbers in a certain way, preferably in a way that would be easy for the algorithm to adapt to later on.

NOW, what's REALLY important is how we get from an array of numbers to a single number.

There is a huge map of 'neurons'. These 'neurons' take in a certain amount of input (multiple numbers, or maybe only one), do some logic, and add some numbers together in some almost random way (a way that changes over time; that's how I can find the best one), and then each outputs ONE number.

Many of these things are connected to one another: the output from one neuron can be used by any other neuron that wants to know that value, which then spits out its own number.

In the end, there is one deciding neuron that gives out the final output.

Many of these structures compete with each other to play a game. The ones that win more than the others get to have babies, which are exactly like their parents but with a few mutations. Then all the children and all the parents fight in the game, the best are selected, and the whole process repeats over and over again.
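That loop, as a bare-bones sketch; the "game" here is a made-up one (score by closeness to a hidden number) just to show the select-mutate-repeat shape, not a real game.

```python
import random

TARGET = 0.73  # a made-up "game": closer guesses score higher

def fitness(genome):
    return -abs(genome - TARGET)

def mutate(genome):
    # a child is exactly like its parent, but with a few mutations
    return genome + random.gauss(0, 0.05)

def evolve(generations=50, pop_size=20):
    pop = [random.uniform(0, 1) for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = [mutate(p) for p in parents]
        pop = parents + children  # parents and children fight together
    return max(pop, key=fitness)

random.seed(1)
best = evolve()
print(round(best, 2))  # best guess after evolving; should land near 0.73
```

Swap the made-up `fitness` for "win rate against the rest of the population" and the genome for a network-building DNA strand, and this is the same shape as the program described above.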

Tomorrow: weaknesses, and how to make this better

Saturday, September 19, 2009

Proof 2: Genetic algorithms, selection and fitness

Before I continue, I would like to define genetic algorithms to everybody just one more time...

Genetic algorithms try to replicate Darwin's theory of evolution. Problem is, we haven't exactly evolved a human out of a computer yet... I wanna do that.

Fitness: how well an organism does in its environment

I am trying to prove something... I'm not quite sure what yet, but this will probably happen a lot, so I will be posting my previous remarks with this link. So if some number shows up, it should show up somewhere in the posts below.
Link to other proof post:


Proof 1: something else is needed....

Assume we are using a genetic algorithm and we want the perfect strategy in a game of 'Go'.
'Go' has far too many ways to play for a computer to calculate them all (just consider it too many to be practical to solve).

1. the perfect strategy beats all other strategies
2. If you learn from experience alone, you may only be equipped to handle experiences you have been exposed to
3. Selection is made by the ability to handle experiences
4. ability to handle experience depends on genes
5. which genes have survived depends on what events or experiences have led to that point
6. genes represent past experience (3,4,5)
7. if you only learn by experiences, you must be exposed to all experiences to become perfect (1,6)
8. to become perfect more than experience is needed OR nearly all experiences must be encountered.
(assumed there was too many experiences)
9. more than experience is needed (8+assumption)
10. genes are not enough OR genes must represent more than experience (edit) OR perfection is impossible and maximizing is necessary

Friday, September 18, 2009

Balance and perfection

Basic genetic algorithms are very simple, in my opinion. However, it has become apparent that the way they are tested and selected can determine how good the outcome is. Naturally, one might think of simply choosing the best based on how each candidate fares against the rest of the population, or against some standard.

The problem with choosing one standard for the program to beat is that the single standard may not be as good as it could be. Perhaps there are scenarios, never presented during training, that the algorithms will not be able to handle.

To fix this problem, I decided to make the population play against itself; that way, each member is exposed to many strategies... but there is a much more interesting problem with this.

If the population fights against itself, there is the possibility it will keep improving until something reaches the perfect algorithm, and then the gene pool is dominated by it.

However, there is a much much much more probable scenario.
Perhaps the system will improve consistently, but only until it reaches balance. What if it gets to the point where all the algorithms do roughly as well as each other, not exactly, but close enough to significantly stall the process? For instance, some strategies may develop as counters to other organisms' strategies, but these counters are all intertwined in some simple or not-so-simple order. These strategies will always beat each other out and play out in the same way. An equilibrium will be established, and nothing will truly improve, because no one can say which of these strategies is truly 'better'.
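Here's a toy illustration of that stalemate (my own example, not from the program): three strategies that counter each other in a cycle, rock-paper-scissors style. When every strategy plays every other, all the win counts come out identical, so selection has nothing to grab onto.

```python
from itertools import product

# each strategy beats exactly one other, in a cycle
BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

def play(a, b):
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

# every strategy plays every other; tally the wins
wins = {s: 0 for s in BEATS}
for a, b in product(BEATS, repeat=2):
    if play(a, b) == 1:
        wins[a] += 1

print(wins)  # every strategy wins exactly once: perfectly balanced
```

No strategy is 'better' than the others in this cycle, so a selection rule based only on head-to-head wins stalls exactly as described.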

There has to be a way to shake up the balance. Perhaps my randomness idea will be enough, but I must consider the likely possibility that it will not. What can make the balance fall apart without abandoning strategies completely?

Thursday, September 17, 2009

how neatly done!

Typical genetic algorithm:
Specific traits are chosen very carefully as the computer scientist approaches the problem. A specific string of binary numbers is chosen to represent his creation, and these numbers will mutate to perform the task at hand. To make this work, the scientist must test how well each binary string does in the given scenario, so that he can more carefully decide which ones to keep.

There is tedious calculation: each candidate may be given multiple scenarios, then ranked on how well its binary code performs. Advanced math is used to decide how each one should be chosen, how many children each binary string should have, or whether it should die.

Then, when the scientist runs this program, something happens. There is a 'super subject', but not in a good way. This subject is not getting better, yet its genes are dominating the gene pool. This super subject is not as good as we would like, so much more mathematical calibration must be done.

Well, what if the scientist did something else....
A lot of times in real life, things that happen seemingly at random give us an advantage. While over time the strongest survive, some of the weaker ones survive too and keep traits that may become useful later. We need the weak and the strong, and some way to choose between them.

Well, what if they competed against each other and we recorded how many times each one wins? Not all of them will necessarily get to play the same number of games: each pair is chosen at random to fight, then it's on to the next pair. There are a lot of pairings, and yes, it is a bit random, but those who are better will succeed over time...
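That random-pairing idea, sketched with a stand-in "fight" (a weighted coin flip based on a hidden skill number I made up; a real fight would be a game between two evolved strategies):

```python
import random

def tournament(population, n_fights=1000):
    """population: dict of name -> hidden skill. Returns win counts."""
    wins = {name: 0 for name in population}
    names = list(population)
    for _ in range(n_fights):
        a, b = random.sample(names, 2)  # a random pair fights
        # stand-in fight: higher skill wins more often, but not always
        p = population[a] / (population[a] + population[b])
        winner = a if random.random() < p else b
        wins[winner] += 1
    return wins

random.seed(2)
pop = {"weak": 1.0, "middle": 2.0, "strong": 4.0}
print(tournament(pop))  # uneven play counts, but "strong" racks up the most wins
```

Individual pairings are noisy and some subjects play more than others, yet over many fights the win counts still sort the population by how good it really is, which is all the selection step needs.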

let's see what happens!!!