We are almost ready to leave our discussion of the feature net. In closing, though, we need to work through two last points: first, the nature of the “knowledge” built into the network and, second, the broader question of why the network should function as it does. We’ve seen many indications that, somehow, knowledge of spelling patterns is “built into” the network. For example, the knowledge that “CO” is a common digram in English, while “CF” is not, is built into the network by virtue of the fact that the CO-detector has a higher baseline activation than the CF-detector. As a result, it’s literally true that the system is better prepared for one of these patterns than for the other. In a way, it seems as if the system “expects” one of these patterns (“CO”) to appear often but has no such expectation for the other (“CF”). However, this “expectation” is an entirely passive one, built into the activation levels (and therefore the preparedness) of the net.
The sense in which the net “knows” these facts about spelling is worth emphasizing, since we will return to this idea in later chapters. This knowledge is not explicitly stored anywhere. Nowhere within the net is there a sentence like “CO is a common digram in English; CF is not.” Instead, this memory, if we even want to call it that, is manifest only in the fact that the CO-detector happens to be more primed than the CF-detector.
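The idea that knowledge can reside purely in activation levels, with no stored rule anywhere, can be made concrete with a small sketch. The `Detector` class, the particular baseline numbers, and the threshold below are all illustrative assumptions, not part of the model as the text describes it; the point is only that the two detectors differ in nothing except a number.

```python
# A minimal sketch (illustrative assumptions, not the text's model): detectors
# whose only "knowledge" of spelling frequency is their baseline activation.

class Detector:
    def __init__(self, pattern, baseline):
        self.pattern = pattern
        self.baseline = baseline      # resting (baseline) activation level
        self.activation = baseline

    def receive_input(self, strength):
        # Input adds to current activation; a well-primed (high-baseline)
        # detector reaches threshold with weaker input.
        self.activation += strength

    def fires(self, threshold=1.0):
        return self.activation >= threshold

# "CO" is a common digram, so its detector sits closer to threshold at rest;
# "CF" is rare, so its detector starts lower. Nowhere is a rule like
# "CO is common in English" stored -- only these two numbers differ.
co = Detector("CO", baseline=0.8)
cf = Detector("CF", baseline=0.2)

for d in (co, cf):
    d.receive_input(0.3)              # identical input to both detectors

print(co.fires())   # True  -- the primed detector responds
print(cf.fires())   # False -- the unprimed detector does not
```

Running this, the CO-detector fires while the CF-detector does not, even though both received exactly the same input: the system’s “expectation” about English spelling is entirely passive, carried by the baselines alone.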