A Brief Overview of the Infoglut



#1 celindra

  • Guest
  • 43 posts
  • Location:Saint Joseph, TN

Posted 22 May 2003 - 07:55 PM

A Brief Overview of the Infoglut
Information overload as a barrier to Transhuman technology

by Michael Haislip

Any resident of an industrialized nation experiences the phenomenon of information overload, the sometimes-overwhelming experience of too much incoming information. It is nearly impossible to grasp the sheer amount of data that flows through modern society. In his book Information Anxiety, Richard Wurman estimates that a single weekday edition of The New York Times “contains more information than the average person was likely to come across in a lifetime in seventeenth-century England.” Our society generates trillions of pieces of data every day. No one person can reasonably hope to process even a fraction of that data stream.

Whither the cure for cancer?

It is probable that infoglut is already slowing scientific advances. The sheer amount of existing research presents a formidable barrier to scientists in all fields. John Naisbitt writes in Megatrends: “some scientists claim it takes less time to do an experiment than to find out whether or not it has been done before.” Clearly, this meta-research (the act of researching research) distracts from the goal of progress.

The problem, although greatly magnified since the advent of the Internet, stretches back several decades. In 1966, information analyst Hubert Murray estimated that “approximately 20,000,000 words of technical information” were being recorded in every 24-hour period. That estimate was made almost four decades ago. One can only imagine the increased output rate that has occurred since then.

Compounding the matter is the lack of available research from Third World nations. Frequently, the research produced in the Third World is never published in major citation services such as the Science Citation Index (SCI), a primary source for locating citable research. Instead, non-Western data is relegated to obscure foreign journals. Thus, the majority of scientists will never learn of the research. Western scientists are overloaded with data from their own countries and cannot reasonably be expected to keep abreast of research from other nations, especially that research which is published in a foreign language.

In the August 1995 issue of Scientific American, a Mexican doctor said of his cholera studies, "Our researchers have interesting findings about some new strains. International journals refuse our papers because they don't consider cholera a hot topic. But what if these strains spread across the border to Texas and California? They will think it important then. Meanwhile the previous knowledge about the disease will have been lost. Scientists searching the literature will not find the papers published in Mexican journals, because they are not indexed." What other advances have been made in the Third World that are unknown to the West? Perhaps the cure for cancer has already been published by a doctor in Zimbabwe, while Western science pushes on, oblivious to its existence.

Data filtering in human-level intelligence

Current cognitive research suggests that humans handle incoming information using a combination of positive pattern matching (e.g. “This data looks like previous useful data, so I will process it.”), negative pattern matching (e.g. “This data looks like previous non-useful data, so I will ignore it.”) and “pre-wired” neurological limitations. One of these limitations is the fact that humans can only hold a very limited amount of data in their active memory, forcing these active data “chunks” to either be stored in long-term memory or discarded to make room for more information. The maximum number of active data chunks varies from seven to thirteen, depending on which cognitive scientist you believe.
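The filtering strategy described above can be pictured in a toy sketch. The pattern sets, capacity constant, and sample headlines below are invented for illustration; the point is the mechanism: negative matches are discarded outright, positive matches are held in a bounded "active memory," and anything evicted must be stored elsewhere or lost.

```python
from collections import deque

# Invented patterns standing in for learned experience.
USEFUL_PATTERNS = {"cholera", "strain", "vaccine"}  # resembles past useful data
NOISE_PATTERNS = {"advert", "gossip", "spam"}       # resembles past useless data

ACTIVE_CAPACITY = 7  # the classic "about seven chunks" working-memory limit
active_memory = deque(maxlen=ACTIVE_CAPACITY)
long_term_store = []

def process(item: str) -> None:
    words = set(item.lower().split())
    if words & NOISE_PATTERNS:
        return  # negative match: ignore without further processing
    if words & USEFUL_PATTERNS:
        if len(active_memory) == ACTIVE_CAPACITY:
            long_term_store.append(active_memory[0])  # evicted chunk is stored
        active_memory.append(item)  # positive match: hold in active memory

for headline in ["new cholera strain found",
                 "celebrity gossip update",
                 "vaccine trial results"]:
    process(headline)

print(list(active_memory))  # the two useful items survive the filter
```

Note that items matching neither set are silently dropped too, which is exactly the worry raised later in this essay: unindexed research looks like noise to an overloaded filter.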

An artificial intelligence (AI) with at least human-equivalent cognitive powers would be able to handle a larger number of data chunks, assuming it was designed to do so. The maximum number of chunks would only be limited by memory size and bus speed. Thus, an average human can hold a single phone number in active memory while an AI could easily hold an entire phone book in active memory.

Data organization may not be the key

Many would suggest that better data organization is the key to handling information overload. At human-equivalent mental capacity, this approach seems to be effective. In fact, it is used every day.

We have Internet directories such as the Open Directory Project which organize data into categories. We consult the yellow pages in the phone book when we need a plumber or an accountant.

However, humans begin to run into problems when they have to organize these organizations. A mental model must be created to categorize the sub-models--a meta-model of sorts. For example, your handy local phone book represents a first-level data organization. Now let’s say you have multiple phone books from different areas. To effectively use the collection of books, you must create another mental model, this one to classify the already-existing sub-models. Thus, each time a new model is created, another piece of data is added to the glut. Another layer of complexity is added.

Figure 1 -- First-level organizations can only be categorized by adding a second layer of complexity, thus adding to the infoglut.
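The phone-book example above can be made concrete in a short sketch (the cities and listings are invented). Each book is a first-level organization; using several books at once requires a second-level meta-model to classify the sub-models -- and that meta-model is itself one more piece of data added to the glut.

```python
# Each phone book is a first-level organization: listings by category.
nashville_book = {"plumbers": ["Acme Pipe"], "accountants": ["Ledger & Co"]}
memphis_book = {"plumbers": ["Delta Drains"], "accountants": ["Bluff City CPA"]}

# Using multiple books forces a second layer: a meta-model classifying
# the already-existing sub-models. One more model, one more layer.
meta_model = {"Nashville": nashville_book, "Memphis": memphis_book}

def find(city: str, category: str) -> list:
    """Every lookup now traverses two layers instead of one."""
    return meta_model[city][category]

print(find("Memphis", "plumbers"))  # ['Delta Drains']
```

Each new layer makes individual lookups possible at the cost of one more structure the user must know about, which is the trade-off Figure 1 illustrates.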

Conclusions

We can only estimate that a significant number of advances are lost to inefficient data processing. Although this is disheartening to those seeking faster development of technology, some short-term solutions can be employed:
  • Apply current information management techniques more widely, such as hypertext and human-edited directory services
  • Encourage efficient use of external memory aids, such as Palm Pilots
  • Provide translation services for foreign language research journals
  • Improve search algorithms and add natural language querying ability
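To picture the last item, consider that a better search algorithm does not just return every document containing a keyword; it ranks documents by how well they match. A minimal term-frequency sketch (the corpus and scoring scheme are invented for illustration):

```python
from collections import Counter

# A tiny invented corpus of research abstracts.
documents = {
    "doc1": "cholera strains isolated in mexico show new resistance",
    "doc2": "survey of cholera outbreaks and cholera treatment history",
    "doc3": "accounting software trends for small firms",
}

def score(query: str, text: str) -> int:
    """Count how many times the query terms occur in the document."""
    counts = Counter(text.split())
    return sum(counts[term] for term in query.lower().split())

def search(query: str) -> list:
    """Return matching document ids, most relevant first."""
    ranked = sorted(documents,
                    key=lambda d: score(query, documents[d]),
                    reverse=True)
    return [d for d in ranked if score(query, documents[d]) > 0]

print(search("cholera"))  # doc2 mentions cholera twice, so it ranks first
```

Real engines refine the weighting considerably, but even this crude ranking spares the user from reading every hit -- the essence of filtering rather than organizing.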

Within the next decade, I expect the following solutions to emerge:
  • Widespread use of Bayesian data filtering and prediction to lessen the information load upon users
  • Widespread use of sub-intelligent and intelligent agents, advanced programs which act independently from the user without requiring constant input and direction
  • The rise of the Semantic Web, which, according to its developers, "is an extension of the current web in which information is given well-defined meaning, better enabling computers and people to work in cooperation"
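The Bayesian filtering mentioned in the first item was already appearing in spam filters of this era. A minimal naive-Bayes sketch (the training examples and equal-prior assumption are invented for illustration) that predicts whether a new item deserves a user's attention:

```python
import math
from collections import Counter

# Invented training data: items the user previously kept or discarded.
kept = ["cholera strain research update", "new vaccine trial data"]
discarded = ["win a free prize now", "limited time prize offer"]

vocab = {w for text in kept + discarded for w in text.split()}

def word_probs(texts):
    """Laplace-smoothed per-word probabilities for one class."""
    counts = Counter(w for t in texts for w in t.split())
    total = sum(counts.values())
    return {w: (counts[w] + 1) / (total + len(vocab)) for w in vocab}

p_kept, p_disc = word_probs(kept), word_probs(discarded)

def looks_useful(item: str) -> bool:
    """Compare log-likelihoods under each class (equal priors assumed)."""
    log_kept = sum(math.log(p_kept[w]) for w in item.split() if w in vocab)
    log_disc = sum(math.log(p_disc[w]) for w in item.split() if w in vocab)
    return log_kept > log_disc

print(looks_useful("new strain data"))   # True: resembles kept items
print(looks_useful("free prize offer"))  # False: resembles discarded items
```

The appeal for infoglut is that the filter learns from the user's own behavior, so the burden of deciding what to read shifts from the person to the software.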

A user-friendly design is the key to improved information management. Existing methods focus mainly on the data and its presentation, while often ignoring the needs of the user. Information should be a tool, not a burden. It must be subjugated to the user's goal.

#2 Sophianic

Sophianic

    Immortality

  • Guest
  • 197 posts
  • Location:Canada

Posted 22 May 2003 - 11:31 PM

A user-friendly design is the key to improved information management. Existing methods focus mainly on the data and its presentation, while often ignoring the needs of the user. Information should be a tool, not a burden. It must be subjugated to the user's goal.

Agreed. It'll be interesting to see how our relationship to information evolves over the next ten to twenty years with the introduction of increasingly sophisticated AI and the merger of biological and machine intelligence (or will it be more like a partnership by proxy?), as the generation and proliferation of information continues to accelerate. One can imagine AI servants (designed specifically for that purpose) tailored to reflect the abilities, and cater to the preferences of, their human sponsors. The social implications are provocative (and potentially explosive) to say the least.

One also wonders how identity-critical information would be gathered and stored (and deliberately and selectively erased/released through time?) over thousands/millions of years of living (assuming a futuristic, immortalist perspective). Or would would-be immortals plug themselves into, and have quick and easy access to, larger reservoirs of information than their own for their daily needs as if in a symbiotic relationship ~ both in relation to each other and with respect to the reservoirs themselves? Would this/these reservoir(s) also be sentient? Intelligent? Self-aware?

One must continue to wonder about the current and future trends of information processing, acquisition and distribution, relative to the practicalities of daily living and flourishing, but especially to our identities as persons in a quest for immortality ...
