Hi,
I am Charles George. I live in Cochin, and I took my B.Tech (Mechanical, 2005-09) at MBC Peermade, Kerala. I will help you all get the latest seminar and project topics. If you want to contact me, please mail charlesgeorgeg@gmail.com.

Look here for your seminar & project topics


Monday, January 25, 2010

WarDriving

Wardriving is the act of searching for Wi-Fi wireless networks from a moving vehicle. It involves using a car or truck and a Wi-Fi-equipped computer, such as a laptop or a PDA, to detect the networks. As of 2002 it was also known as WiLDing (Wireless LAN Driving, a term that never gained popularity and is no longer used), originating in the San Francisco Bay Area with the Bay Area Wireless Users Group (BAWUG). It is similar to using a radio scanner.

Many wardrivers use GPS devices to record the location of each network they find and log it on a website (the most popular being WiGLE). For better range, antennas are built or bought, varying from omnidirectional to highly directional. Software for wardriving is freely available on the Internet, notably NetStumbler for Windows, Kismet for Linux, and KisMAC for Macintosh.

Wardriving was named after wardialing (popularized in the Matthew Broderick movie WarGames), in which software used a phone modem to dial numbers sequentially and see which ones were connected to a computer, fax machine, or similar device.

Boids

Boids, developed by Craig Reynolds in 1986, is an artificial life program, simulating the flocking behaviour of birds.

As with most artificial life simulations, Boids is an example of emergent behaviour; that is, the complexity of Boids arises from the interaction of individual agents (the boids, in this case) adhering to a set of simple rules. The rules applied in the simplest Boids world are as follows:

  • separation: steer to avoid crowding local flockmates
  • alignment: steer towards the average heading of local flockmates
  • cohesion: steer to move toward the average position of local flockmates

More complex rules can be added, such as obstacle avoidance and goal seeking.

The movement of Boids can be characterized either as chaotic (splitting groups and wild behaviour) or as orderly. Unexpected behaviours, such as flocks splitting and reuniting after avoiding obstacles, can be considered emergent.

The boids framework is often used in computer graphics, providing realistic-looking representations of flocks of birds and other creatures, such as schools of fish or herds of animals.

Boids work in a manner similar to cellular automata, since each boid acts autonomously and references a neighbourhood, as do cellular automata.
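The three rules above can be sketched in a few lines of Python. The weights and the neighbourhood radius below are illustrative tuning constants, not Reynolds' original values.

```python
import math

def step(boids, dt=1.0, neighbor_dist=5.0,
         w_sep=0.05, w_align=0.05, w_coh=0.01):
    """Advance the flock one tick. Each boid is a dict holding a
    position (x, y) and a velocity (vx, vy)."""
    new = []
    for b in boids:
        # Local flockmates: every other boid within neighbor_dist.
        mates = [m for m in boids if m is not b and
                 math.dist((b["x"], b["y"]), (m["x"], m["y"])) < neighbor_dist]
        ax = ay = 0.0
        if mates:
            n = len(mates)
            # Cohesion: steer toward the average position of flockmates.
            ax += w_coh * (sum(m["x"] for m in mates) / n - b["x"])
            ay += w_coh * (sum(m["y"] for m in mates) / n - b["y"])
            # Alignment: steer toward the average heading (velocity).
            ax += w_align * (sum(m["vx"] for m in mates) / n - b["vx"])
            ay += w_align * (sum(m["vy"] for m in mates) / n - b["vy"])
            # Separation: steer away from each nearby flockmate.
            for m in mates:
                ax += w_sep * (b["x"] - m["x"])
                ay += w_sep * (b["y"] - m["y"])
        vx, vy = b["vx"] + ax * dt, b["vy"] + ay * dt
        new.append({"x": b["x"] + vx * dt, "y": b["y"] + vy * dt,
                    "vx": vx, "vy": vy})
    return new
```

With these weights, two boids placed very close together drift apart on the next tick, since separation outweighs cohesion at short range; obstacle avoidance and goal seeking would simply add further acceleration terms.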

Epistemology

Epistemology or theory of knowledge is the branch of philosophy that studies the nature and scope of knowledge. The term epistemology is based on the Greek words episteme (meaning knowledge) and logos (meaning account/explanation); it is thought to have been coined by the Scottish philosopher James Frederick Ferrier.

Much of the debate in this field has focused on analyzing the nature of knowledge and how it relates to similar notions such as truth, belief, and justification. It also deals with the means of producing knowledge, as well as skepticism about different knowledge claims. In other words, epistemology addresses the questions: What is knowledge? How is knowledge acquired? What do people know? Although approaches to answering any one of these questions frequently involve theories connected to the others (for instance, a theory of what knowledge is may be influenced by broad views of what people know, with restrictive definitions of knowledge thereby dismissed), there is enough particular to each that they may be treated separately.

There are many different topics, stances, and arguments in the field of epistemology. Recent studies have dramatically challenged centuries-old assumptions, and it therefore continues to be vibrant and dynamic.

SemaCode

Semacode is a private company, and also that company's trade name for machine-readable ISO/IEC 16022 Data Matrix symbols that encode Internet Uniform Resource Locators (URLs). It is primarily aimed at cellular phones with built-in cameras.

Using the Semacode SDK software, a URL can be converted into a type of barcode resembling a crossword puzzle, called a tag. A tag can be quickly captured with a mobile phone's camera and decoded to obtain a Web site address, which can then be opened in the phone's web browser.

Real Time Operating System

A real-time system is defined as follows: a real-time system is one in which the correctness of the computations depends not only upon the logical correctness of the computation but also upon the time at which the result is produced. If the timing constraints of the system are not met, system failure is said to have occurred.


There are two types.

Hard real-time operating systems:
  • Strict time constraints
  • Secondary storage limited or absent
  • Conflict with time-sharing systems
  • Not supported by general-purpose operating systems

Soft real-time operating systems:
  • Relaxed time constraints
  • Limited utility in industrial control or robotics
  • Useful in applications (multimedia, virtual reality) requiring advanced operating-system features

In the robot example, the system would be hard real time if the robot arriving late causes completely incorrect operation, and soft real time if the robot arriving late merely means a loss of throughput. Much of what is done in real-time programming is actually soft real time. Good system design often implies a level of safe/correct behaviour even if the computer system never completes the computation, so if the computer is only a little late, the system effects may be somewhat mitigated.

What makes an OS an RTOS?

1. An RTOS (Real-Time Operating System) has to be multi-threaded and preemptible.

2. The notion of thread priority has to exist, as there is for the moment no deadline-driven OS.

3. The OS has to support predictable thread synchronisation mechanisms.

4. A system of priority inheritance has to exist.

5. For every system call, the maximum time it takes should be known; it should be predictable and independent of the number of objects in the system.

6. The maximum time the OS and drivers mask interrupts should be known. The following points should also be known by the developer:

1. System Interrupt Levels.

2. Device driver IRQ Levels, maximum time they take, etc.
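Point 4 above, priority inheritance, can be illustrated with a toy model (hypothetical classes, not any real RTOS API): while a high-priority task blocks on a mutex, the holder temporarily runs at the waiter's priority, so a medium-priority task cannot preempt it and cause priority inversion.

```python
class Task:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority            # base priority (higher = more urgent)
        self.effective_priority = priority  # may be raised by inheritance

class Mutex:
    """Toy mutex with priority inheritance."""
    def __init__(self):
        self.owner = None

    def acquire(self, task):
        if self.owner is None:
            self.owner = task
            return True
        # A higher-priority waiter boosts the current holder, so the
        # holder cannot be preempted by medium-priority tasks while it
        # finishes the critical section.
        if task.priority > self.owner.effective_priority:
            self.owner.effective_priority = task.priority
        return False

    def release(self):
        self.owner.effective_priority = self.owner.priority  # drop the boost
        self.owner = None
```

In a run where a low-priority task holds the lock and a high-priority task tries to acquire it, the holder's effective priority is raised to the waiter's level until it releases the mutex.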

Cryptovirology

Cryptovirology is a field that studies how to use cryptography to design powerful malicious software. It encompasses overt attacks such as cryptoviral extortion, where a cryptovirus, cryptoworm, or cryptotrojan hybrid-encrypts the victim's files and the user must pay the malware author to receive the needed session key (which is encrypted under the author's public key, contained in the malware). The field also encompasses covert attacks in which the attacker secretly steals private information such as private keys. An example of the latter type of attack is the asymmetric backdoor: a backdoor (e.g., in a cryptosystem) that can be used only by the attacker, even after it is found. There are many other attacks in the field that are not mentioned here.
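The key handling in the extortion scheme (a symmetric session key locked under the attacker's public key) can be illustrated with textbook RSA and toy parameters. Everything here is deliberately tiny and wildly insecure; it is purely an illustration of why only the holder of the private key can release the files.

```python
# Textbook RSA with tiny primes -- illustration only.
p, q = 61, 53
n = p * q                          # public modulus (ships in the malware)
e = 17                             # public exponent (ships in the malware)
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (attacker keeps this)

def xor_encrypt(data: bytes, key: int) -> bytes:
    # Stand-in for the symmetric cipher keyed by the session key.
    return bytes(b ^ (key & 0xFF) for b in data)

session_key = 42                      # per-victim symmetric key
ciphertext = xor_encrypt(b"victim file", session_key)
locked_key = pow(session_key, e, n)   # session key encrypted under the public key

# Only the attacker, holding d, can recover the session key:
recovered = pow(locked_key, d, n)
restored = xor_encrypt(ciphertext, recovered)
```

The victim's machine holds only `n`, `e`, and `locked_key`, none of which reveal `session_key`; that asymmetry is what the extortion depends on.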

Artificial Intelligence for Speech Recognition

Artificial Intelligence (AI) involves two basic ideas. First, it involves studying the thought processes of human beings. Second, it deals with representing those processes via machines (computers, robots, etc.). AI is the behavior of a machine which, if performed by a human being, would be called intelligent; it makes machines smarter and more useful, and is less expensive than natural intelligence. Natural Language Processing (NLP) refers to Artificial Intelligence methods of communicating with a computer in a natural language like English. The main objective of an NLP program is to understand input and initiate action.


The input words are scanned and matched against internally stored known words. Identification of a keyword causes some action to be taken. In this way, one can communicate with the computer in one's own language. One of the main benefits of a speech recognition system is that it lets the user do other work simultaneously.
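The scan-and-match step described above can be sketched in a few lines of Python. The vocabulary and action strings here are hypothetical, chosen only to show the keyword-spotting idea.

```python
KNOWN_WORDS = {                      # hypothetical stored command vocabulary
    "open": "opening file manager",
    "play": "starting media player",
    "stop": "halting playback",
}

def respond(utterance: str) -> str:
    """Scan the input words and match them against the stored known
    words; the first keyword identified triggers its action."""
    for word in utterance.lower().split():
        if word in KNOWN_WORDS:
            return KNOWN_WORDS[word]
    return "no keyword recognized"
```

For example, `respond("please play some music")` matches the keyword "play" and triggers that action, while an utterance with no stored keyword falls through to the default.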

Softwear Computing

In this talk, we introduce computational aspects of softwear, specifically fabric and body-based gestural controllers for realtime, time-based media.

Softwear is part of our approach to wearable computing that leverages the naturalized affordances and the social conditioning that fabrics, furniture and physical architecture already provide to our everyday interaction. We exploit physical plus computational materials and rely on expert craft from experimental performance, music and plastic arts in order to make a new class of personal and collective expressive media. In this talk, I will survey Topological Media Lab research areas including gesture tracking, realtime video synthesis, realtime audio synthesis, and media choreography based on continuous state evolution.

Traffic Pulse Technology

The Traffic Pulse network is the foundation for all of Mobility Technologies® applications. This network uses a process of data collection, data processing, and data distribution to generate traffic information unique in the industry. Digital Traffic Pulse® collects data through a sensor network, processes and stores the data in a data center, and distributes that data through a wide range of applications.

Unique among private traffic information providers in the U.S., Mobility Technologies' real-time and archived Traffic Pulse data offer valuable tools for a variety of commercial and governmental applications:

* Telematics - for mobile professionals and others, Mobility Technologies traffic information complements in-vehicle navigation devices, informing drivers not only how to get from point A to point B but also how long it will take, or even directing them to an alternate route.
* Media - for radio and TV broadcasters, cable operators, and advertisers who sponsor local programming, Traffic Pulse Networks provides traffic information and advertising opportunities for a variety of broadcasting venues.
* Intelligent Transportation Systems (ITS) business solutions - for public agencies, Mobility Technologies applications aid in infrastructure planning, safety research, and livable community efforts; integrate with existing and future ITS technologies and deployments; and provide data reporting tools.

Blue Tooth

Bluetooth wireless technology is a cable-replacement technology that provides wireless communication between portable devices, desktop devices, and peripherals. It is used to swap data and synchronize files between devices without having to connect them with cables. The wireless link has a range of about 10 m, which offers the user mobility. There is no need for the user to open an application or press a button to initiate a process: Bluetooth wireless technology is always on and runs in the background. Bluetooth devices scan for other Bluetooth devices, and when these devices are in range they start to exchange messages so they can become aware of each other's capabilities. These devices do not require a line of sight to transmit data to each other. Within a few years, about 80 percent of mobile phones are expected to carry a Bluetooth chip. The Bluetooth transceiver operates in the globally available unlicensed ISM radio band at 2.4 GHz, which does not require an operator license from a regulatory agency. This means that Bluetooth technology can be used virtually anywhere in the world. Bluetooth is an economical wireless solution that is convenient, reliable, and easy to use, and it operates over a useful distance.

Initial development started in 1994 at Ericsson. Bluetooth now has a Special Interest Group (SIG) with 1800 member companies worldwide. Bluetooth technology enables voice and data transmission over a short-range radio link. A wide range of devices can be connected easily and quickly without the need for cables. Soon people the world over will enjoy the convenience, speed, and security of instant wireless connections. Bluetooth is expected to be embedded in hundreds of millions of mobile phones, PCs, laptops, and a whole range of other electronic devices in the next few years, mainly because the elimination of cables makes the work environment look and feel comfortable and inviting.

ATM

Computers today span the entire spectrum from PCs, through professional workstations, up to supercomputers. As the performance of computers has increased, so too has the demand for communication between all systems for exchanging data, and between central servers and the associated host computer systems.

The replacement of copper with fiber and the advances in digital communication and encoding are at the heart of several developments that will change the communication infrastructure. The former development has provided us with a huge amount of transmission bandwidth, while the latter has made possible the transmission of all information, including voice and video, through a packet-switched network.


With work increasingly shared over large distances, including international communication, the systems must be interconnected via wide area networks, with increasing demands for higher bit rates.


For the first time, a single communications technology meets LAN and WAN requirements and handles a wide variety of current and emerging applications. ATM is the first technology to provide a common format for bursts of high-speed data and the ebb and flow of the typical voice phone call. Seamless ATM networks provide desktop-to-desktop multimedia networking over a single-technology, high-bandwidth, low-latency network, removing the boundary between LAN and WAN.


ATM is simply a data link layer protocol. It is asynchronous in the sense that the recurrence of cells containing information from an individual user is not necessarily periodic. It is the technology of choice for the evolving B-ISDN (Broadband Integrated Services Digital Network) and for next-generation LANs and WANs. ATM supports transmission speeds of 155 Mbit/s and beyond. Photonic approaches have made ATM switches feasible, and an evolution towards an all-packetized, unified, broadband telecommunications and data communication world based on ATM is taking place.
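A concrete feel for the cell format helps here: a standard ATM cell is a fixed 53 bytes, a 5-byte header plus a 48-byte payload. A minimal Python sketch of segmenting a message into cells might look like this; the all-zero header is a placeholder for the real VPI/VCI routing fields.

```python
HEADER_SIZE, PAYLOAD_SIZE = 5, 48   # an ATM cell is a fixed 53 bytes

def to_cells(data: bytes, header: bytes = b"\x00" * HEADER_SIZE):
    """Segment a message into fixed-length 53-byte ATM cells,
    zero-padding the final payload."""
    cells = []
    for i in range(0, len(data), PAYLOAD_SIZE):
        payload = data[i:i + PAYLOAD_SIZE].ljust(PAYLOAD_SIZE, b"\x00")
        cells.append(header + payload)
    return cells
```

Fixed-size cells are what let ATM switches interleave bursty data with the steady flow of a voice call: every unit of transmission takes the same time to switch, regardless of its source.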

Clockless Chips

Clock speeds are now in the gigahertz range, and there is not much room for speedup before physical realities start to complicate things. With a gigahertz clock powering a chip, signals barely have enough time to make it across the chip before the next clock tick. At this point, speeding up the clock frequency could become disastrous.

This is where a chip that is not constrained by a clock comes into play.


The clockless approach, which uses a technique known as asynchronous logic, differs from conventional computer circuit design in that the switching on and off of digital circuits is controlled individually by specific pieces of data rather than by a tyrannical clock that forces all of the millions of circuits on a chip to march in unison.


A major hindrance to the development of clockless chips is the competitiveness of the computer industry. Presently, it is nearly impossible for companies to develop and manufacture a clockless chip while keeping the cost reasonable. Another problem is that there are not many tools for developing asynchronous chips. Until this changes, clockless chips will not be a major player in the market.

In this seminar the topics covered are the general concept of asynchronous circuits, their design issues, and types of design. The major designs discussed are the bounded-delay method, the delay-insensitive method, and Null Convention Logic (NCL).

The seminar also compares synchronous and asynchronous circuits and surveys the applications in which asynchronous circuits are used.
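A flavour of how asynchronous circuits coordinate without a clock comes from the Muller C-element, a basic building block of delay-insensitive designs: its output changes only when both inputs agree, and otherwise holds its last value, so a stage signals completion only when all its predecessors have. A minimal Python model of that behaviour:

```python
class CElement:
    """Muller C-element: the output follows the inputs only when they
    agree (both 0 or both 1) and holds its previous value otherwise."""
    def __init__(self):
        self.out = 0  # state element: remembers the last agreed value

    def update(self, a: int, b: int) -> int:
        if a == b:
            self.out = a
        return self.out
```

Chained C-elements are what let stages of an asynchronous pipeline hand data forward as soon as it is ready, instead of waiting for a global clock tick.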

Holographic Memory

Devices that use light to store and read data have been the backbone of data storage for nearly two decades. Compact discs revolutionized data storage in the early 1980s, allowing multi-megabytes of data to be stored on a disc that has a diameter of a mere 12 centimeters and a thickness of about 1.2 millimeters. In 1997, an improved version of the CD, called a digital versatile disc (DVD), was released, which enabled the storage of full-length movies on a single disc.

CDs and DVDs are the primary data storage methods for music, software, personal computing and video. A CD can hold 783 megabytes of data. A double-sided, double-layer DVD can hold 15.9 GB of data, which is about eight hours of movies. These conventional storage mediums meet today's storage needs, but storage technologies have to evolve to keep pace with increasing consumer demand. CDs, DVDs and magnetic storage all store bits of information on the surface of a recording medium. In order to increase storage capabilities, scientists are now working on a new optical storage method called holographic memory that will go beneath the surface and use the volume of the recording medium for storage, instead of only the surface area. Three-dimensional data storage will be able to store more information in a smaller space and offer faster data transfer times.

Holographic memory is a developing technology that promises to revolutionise storage systems. It can store up to 1 TB of data in a crystal the size of a sugar cube; data from more than 1,000 CDs can fit into a holographic memory system. Most of the computer hard drives available today can hold only 10 to 40 GB of data, a small fraction of what a holographic memory system can hold. Conventional memories use only the surface to store data, but holographic data storage systems use the volume, which gives them advantages over conventional storage systems. Holographic memory is based on the principle of holography.
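The "more than 1,000 CDs" figure is easy to check against the 783 MB CD capacity quoted earlier, using decimal units throughout:

```python
TB = 10**12               # 1 terabyte (decimal)
CD = 783 * 10**6          # CD capacity quoted above: 783 MB

cds_equivalent = TB // CD  # CDs' worth of data in one sugar-cube-sized crystal
```

This gives 1,277 CDs per terabyte, comfortably above the 1,000 claimed.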

Scientist Pieter J. van Heerden first proposed the idea of holographic (three-dimensional) storage in the early 1960s. A decade later, scientists at RCA Laboratories demonstrated the technology by recording 500 holograms in an iron-doped lithium-niobate crystal and 550 holograms of high-resolution images in a light-sensitive polymer material. The lack of cheap parts and the advancement of magnetic and semiconductor memories put the development of holographic data storage on hold.

WiFiber

A new wireless technology could beat fiber optics for speed in some applications.

Atop each of the Trump towers in New York City, there's a new type of wireless transmitter and receiver that can send and receive data at rates of more than one gigabit per second: fast enough to stream 90 minutes of video from one tower to the next, more than one mile apart, in less than six seconds. By comparison, the same video sent over a DSL or cable Internet connection would take almost an hour to download.
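Those timing claims survive a back-of-the-envelope check, assuming a roughly 700 MB file for 90 minutes of video and a typical 1.5 Mbit/s DSL line (both figures are assumptions for illustration, not from the article):

```python
link = 10**9                     # GigaBeam link rate: 1 Gbit/s
dsl = 1.5 * 10**6                # assumed DSL rate: 1.5 Mbit/s
video_bits = 700 * 10**6 * 8     # assumed 700 MB video file, in bits

over_link = video_bits / link    # a few seconds over the gigabit link
over_dsl = video_bits / dsl      # roughly an hour over DSL
```

This works out to about 5.6 seconds over the gigabit link and a little over an hour's worth of minutes on DSL, matching the article's comparison.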

This system is dubbed WiFiber by its creator, GigaBeam, a Virginia-based telecommunications startup. Although the technology is wireless, the company's approach of high-speed data transfer across a point-to-point network is more of an alternative to fiber optics than to Wi-Fi or WiMax, says John Krzywicki, the company's vice president of marketing. It is best suited for highly specific data delivery situations.

This kind of point-to-point wireless technology could be used in situations where digging fiber-optic trenches would disrupt an environment, where their cost would be prohibitive, or where installation would take too long, as in extending communications networks in cities, on battlefields, or after a disaster.

Blasting beams of data through free space is not a new idea. LightPointe and Proxim Wireless also provide such services. What makes GigaBeam's technology different is that it exploits a different part of the electromagnetic spectrum. Those systems use a region of the spectrum near visible light, at terahertz frequencies; because of this, weather conditions in which visibility is limited, such as fog or light rain, can hamper data transmission.

GigaBeam, however, transmits at 71-76, 81-86, and 92-95 gigahertz frequencies, where these conditions generally do not cause problems. Additionally, by using this region of the spectrum, GigaBeam can outpace traditional wireless data delivery used for most wireless networks.

Because so many devices, from Wi-Fi base stations to baby monitors, use the 2.4 and 5 gigahertz frequencies, those spectrum bands are crowded and therefore require complex algorithms to sort and route traffic, both data-consuming endeavors, says Jonathan Wells, GigaBeam's director of product development. With less traffic in the region between 70 and 95 gigahertz, GigaBeam can spend less time routing data and more time delivering it. And because of the directional nature of the beam, the interference problems that plague more spread-out signals at the traditional frequencies are unlikely; since the tight beams of data will rarely, if ever, cross each other's paths, data transmission can flow without interference, Wells says.

Correction: As a couple of readers pointed out, our title was misleading. Although the emergence of a wireless technology operating in the gigabits per second range is an advance, it does not outperform current fiber-optic lines, which can still send data much faster.

Even with its advances, though, GigaBeam faces the same problem as other point-to-point technologies: creating a network with an unbroken sight line. Still, it could offer some businesses an alternative to fiber optics. Currently, a GigaBeam link, which consists of a set of transmitting and receiving radios, costs around $30,000, but Krzywicki says that improving technology is driving down costs. In addition to outfitting the Trump towers, the company has deployed a link on the campuses of Dartmouth College and Boston University, and two links for San Francisco's Public Utility Commission.

Wi-Fi

The typical Wi-Fi setup contains one or more Access Points (APs) and one or more clients. An AP broadcasts its SSID (Service Set Identifier, or network name) via packets called beacons, which are sent every 100 ms. The beacons are transmitted at 1 Mbit/s and are relatively short, so they do not noticeably affect performance. Since 1 Mbit/s is the lowest rate of Wi-Fi, this ensures that any client that receives a beacon can communicate at at least 1 Mbit/s.

Based on the settings (i.e. the SSID), the client may decide whether to connect to an AP. The firmware running on the client's Wi-Fi card also has an influence: if two APs with the same SSID are in range of the client, the firmware may decide, based on signal strength (signal-to-noise ratio), which of the two APs to connect to. The Wi-Fi standard leaves connection criteria and roaming totally open to the client. This is a strength of Wi-Fi, but it also means that one wireless adapter may perform substantially better than another. Since Windows XP there has been a feature called Zero Configuration which shows the user any available network and lets the end user connect to it on the fly. In the future, wireless cards will be more and more controlled by the operating system: Microsoft's new SoftMAC feature will take over from on-board firmware, and roaming criteria will then be controlled entirely by the operating system.

Because Wi-Fi transmits over the air, it has the same properties as a non-switched Ethernet network; even collisions can therefore occur, as in non-switched Ethernet LANs.
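The AP-selection behaviour described above, where firmware picks between two same-SSID APs by signal strength, can be sketched as follows. The beacon tuple layout `(ssid, bssid, snr_db)` is an assumption chosen for illustration, not any driver's actual data structure.

```python
def choose_ap(beacons, ssid):
    """Among the beacons heard for the desired SSID, pick the access
    point with the strongest signal-to-noise ratio."""
    candidates = [b for b in beacons if b[0] == ssid]
    if not candidates:
        return None
    return max(candidates, key=lambda b: b[2])[1]  # the chosen AP's BSSID
```

Since the standard leaves this policy open, a real driver might weigh in other factors, such as the rates an AP supports or recent roaming history, which is exactly why adapters differ in perceived quality.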


Wi-Fi vs. cellular

Some argue that Wi-Fi and related consumer technologies hold the key to replacing cellular telephone networks such as GSM. Some obstacles to this happening in the near future are the missing roaming and authentication features (see 802.1X, SIM cards, and RADIUS), the narrowness of the available spectrum, and the limited range of Wi-Fi. It is more likely that WiMax could compete with other cellular phone protocols such as GSM, UMTS, or CDMA. However, Wi-Fi is ideal for VoIP applications, as in a corporate LAN or SOHO environment. Early products appeared in the late '90s, though the market did not take off until 2005. Companies such as ZyXEL, UTStarcom, Samsung, Hitachi, and many more are offering VoIP Wi-Fi phones at reasonable prices.

In 2005 ADSL ISPs started to offer VoIP services to their customers (e.g. the Dutch ISP XS4ALL). Since calling via VoIP is low-cost and often free, VoIP-enabled ISPs have the potential to open up the VoIP market. GSM phones with integrated Wi-Fi and VoIP capabilities are being introduced into the market and have the potential to replace landline telephone services.

Currently it seems unlikely that Wi-Fi will compete directly against cellular. Wi-Fi-only phones have a very limited range, so setting up a covering network would be too expensive. These kinds of phones may therefore be best reserved for local use such as corporate networks. However, devices capable of multiple standards may well compete in the market.


Commercial Wi-Fi

Commercial Wi-Fi services are available in places such as Internet cafes, coffee houses and airports around the world (commonly called Wi-Fi-cafés), although coverage is patchy in comparison with cellular:

  • Ozone and OzoneParis - in France, in September 2003, Ozone started deploying the OzoneParis network across the City of Light. The objective: to construct a wireless metropolitan network with full Wi-Fi coverage of Paris. The Ozone Pervasive Network philosophy is to operate on a nationwide scale.
  • WiSE Technologies provides commercial hotspots for airports, universities, and independent cafes in the US;
  • T-Mobile provides hotspots in many Starbucks locations in the U.S. and UK;
  • Pacific Century Cyberworks provides hotspots in Pacific Coffee shops in Hong Kong;
  • a Columbia Rural Electric Association subsidiary offers 2.4 GHz Wi-Fi service across a 3,700 mi² (9,500 km²) region within Walla Walla and Columbia counties in Washington and Umatilla County, Oregon;
  • Other large hotspot providers in the U.S. include Boingo, Wayport and iPass;
  • Sify, an Indian internet service provider, has set up 120 wireless access points in Bangalore, India in hotels, malls and government offices.
  • Vex offers a big network of hotspots spread over Brazil. Telefónica Speedy WiFi has started its services in a new and growing network distributed over the state of São Paulo.


Universal Efforts

Another business model seems to be making its way into the news. The idea is that users will share their bandwidth through their personal wireless routers, which are supplied with specific software. An example is FON, a Spanish start-up created in November 2005. It aims to become the largest network of hotspots in the world by the end of 2006, with 30,000 access points. The users are divided into three categories: Linuses share Internet access for free; Bills sell their personal bandwidth; and Aliens buy access from Bills. Thus the system can be described as a peer-to-peer sharing service, of the kind we usually associate with software.

Although FON has received financial support from companies like Google and Skype, it remains to be seen whether the idea can actually work. There are three main challenges for this service at the moment. The first is that it needs a great deal of media and community attention in order to get through the phase of early adoption and into the mainstream. Second, sharing your Internet connection is often against the terms of use of your ISP, which means we may see ISPs trying to defend their interests in the same way music companies united against free MP3 distribution. And third, the FON software is still in beta, and it remains to be seen whether it presents a good solution to the imminent security issues.


Free Wi-Fi

While commercial services attempt to move existing business models to Wi-Fi, many groups, communities, cities, and individuals have set up free Wi-Fi networks, often adopting a common peering agreement so that networks can openly share with each other. Free wireless mesh networks are often considered the future of the Internet.

Many municipalities have joined with local community groups to help expand free Wi-Fi networks. Some community groups have built their Wi-Fi networks entirely based on volunteer efforts and donations.

For more information, see wireless community network, where there is also a list of the free Wi-Fi networks one can find around the globe.

OLSR is one of the protocols used to set up free networks. Some networks use static routing; others rely completely on OSPF. Wireless Leiden developed their own routing software under the name LVrouteD for community wi-fi networks that consist of a completely wireless backbone. Most networks rely heavily on open source software, or even publish their setup under an open source license.

Some smaller countries and municipalities already provide free Wi-Fi hotspots and residential Wi-Fi Internet access to everyone. Examples include the Kingdom of Tonga and Estonia, which already have a large number of free Wi-Fi hotspots throughout their countries.

In Paris, France, OzoneParis offers free Internet access for life to anybody who contributes to the Pervasive Network's development by making their rooftop available for the Wi-Fi network.

Many universities provide free WiFi internet access to their students, visitors, and anyone on campus. Similarly, some commercial entities such as Panera Bread offer free Wi-Fi access to patrons. McDonald's Corporation also offers Wi-Fi access, often branded 'McInternet'. This was launched at their flagship restaurant in Oak Brook, Illinois and is also available in many branches in London, UK.

However, there is also a third subcategory: networks set up by certain communities, such as universities, where the service is provided free to members and guests of the community such as students, yet is used to make money by letting the service out to companies and individuals outside. An example of such a service is Sparknet in Finland. Sparknet also supports OpenSparknet, a project in which people can make their own wireless access point part of Sparknet in return for certain benefits.

Recently, commercial Wi-Fi providers have built free Wi-Fi hotspots and hotzones. These providers hope that free Wi-Fi access will translate into more users and a significant return on investment.


Wi-Fi vs. Amateur Radio

In the US, the 2.4 GHz Wi-Fi radio spectrum is also allocated to amateur radio users. FCC Part 15 rules govern unlicensed operators (i.e. most Wi-Fi equipment users). Amateur operators retain what the FCC terms 'primary status' on the band under a distinct set of rules (Part 97). Under Part 97, licensed amateur operators may construct their own equipment, use very high-gain antennas, and boost output power to 100 watts on frequencies covered by Wi-Fi channels 2-6. However, Part 97 rules mandate using only the minimum power necessary for communications, forbid obscuring the data, and require station identification every 10 minutes. Therefore, expensive automatic power-limiting circuitry is required to meet regulations, and the transmission of any encrypted data (for example HTTPS) is questionable.

In practice, microwave power amplifiers are expensive and decrease receive-sensitivity of link radios. On the other hand, the short wavelength at 2.4 GHz allows for simple construction of very high gain directional antennas. Although Part 15 rules forbid any modification of commercially constructed systems, amateur radio operators may modify commercial systems for optimized construction of long links, for example. Using only 200 mW link radios and two 24 dB gain antennas, an effective radiated power of many hundreds of watts in a very narrow beam may be used to construct reliable links of over 100 km with little radio frequency interference to other users.


Advantages of Wi-Fi

  • Unlike packet radio systems, Wi-Fi uses unlicensed radio spectrum and does not require regulatory approval for individual deployers.
  • Allows LANs to be deployed without cabling, potentially reducing the costs of network deployment and expansion. Spaces where cables cannot be run, such as outdoor areas and historical buildings, can host wireless LANs.
  • Wi-Fi products are widely available in the market. Different brands of access points and client network interfaces are interoperable at a basic level of service.
  • Competition amongst vendors has lowered prices considerably since their inception.
  • Wi-Fi networks support roaming, in which a mobile client station such as a laptop computer can move from one access point to another as the user moves around a building or area.
  • Many access points and network interfaces support various degrees of encryption to protect traffic from interception.
  • Wi-Fi is a global set of standards. Unlike cellular carriers, the same Wi-Fi client works in different countries around the world.

Symfony

Symfony is a web application framework for PHP5 projects.

It aims to speed up the creation and maintenance of web applications, and to replace repetitive coding tasks with power, control and pleasure.

The very small number of prerequisites makes symfony easy to install on any configuration; you just need Unix or Windows with a web server and PHP 5 installed. It is compatible with almost every database system. In addition, it has a very small overhead, so the benefits of the framework don't come at the cost of increased hosting bills.

Using symfony is so natural and easy for people used to PHP and the design patterns of Internet applications that the learning curve is reduced to less than a day. Its clean design and code readability keep development time short. Developers can apply agile development principles (such as DRY, KISS or the XP philosophy) and focus on application logic without losing time writing endless XML configuration files.

Symfony is aimed at building robust applications in an enterprise context. This means that you have full control over the configuration: from the directory structure to the foreign libraries, almost everything can be customized. To match your enterprise's development guidelines, symfony is bundled with additional tools that help you test, debug and document your project.

Last but not least, by choosing symfony you get the benefits of an active open-source community. It is entirely free and published under the MIT license.

Symfony is sponsored by Sensio, a French Web Agency.

Genetic programming

Genetic programming (GP) is an automated methodology inspired by biological evolution to find computer programs that best perform a user-defined task. It is therefore a particular machine learning technique that uses an evolutionary algorithm to optimize a population of computer programs according to a fitness landscape determined by a program's ability to perform a given computational task. The first experiments with GP were reported by Stephen F. Smith (1980) and Nichael L. Cramer (1985), as described in the famous book Genetic Programming: On the Programming of Computers by Means of Natural Selection by John Koza (1992).

Computer programs in GP can be written in a variety of programming languages. In the early (and traditional) implementations of GP, program instructions and data values were organized in tree-structures, thus favoring the use of languages that naturally embody such a structure (an important example pioneered by Koza is Lisp). Other forms of GP have been suggested and successfully implemented, such as the simpler linear representation which suits the more traditional imperative languages [see, for example, Banzhaf et al. (1998)]. The commercial GP software Discipulus, for example, uses linear genetic programming combined with machine code language to achieve better performance. Differently, the MicroGP uses an internal representation similar to linear genetic programming to generate programs that fully exploit the syntax of a given assembly language.
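To make the tree representation described above concrete, here is a minimal symbolic-regression GP sketch in Python. It is not from the text: the target function, operator set, parameters, and the use of mutation-only evolution (real GP also uses crossover) are arbitrary choices for the example:

```python
import random
import operator

random.seed(0)

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
TERMINALS = ['x', 1.0, 2.0]

def random_tree(depth=3):
    """Grow a random expression tree: tuples are operator nodes, leaves are terminals."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    """Recursively evaluate an expression tree at a given x."""
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    """Total absolute error against the target x**2 + x (lower is better)."""
    return sum(abs(evaluate(tree, x) - (x * x + x)) for x in range(-5, 6))

def mutate(tree):
    """Replace a randomly chosen subtree with a fresh random one."""
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

# Evolve: keep the 50 fittest, refill the population with mutants of survivors.
population = [random_tree() for _ in range(200)]
for generation in range(30):
    population.sort(key=fitness)
    population = population[:50] + [mutate(random.choice(population[:50])) for _ in range(150)]
best = min(population, key=fitness)
```

After a few generations the fittest tree approximates (or exactly matches) the target expression, illustrating how program structure emerges from selection over random variation.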

GP is very computationally intensive and so in the 1990s it was mainly used to solve relatively simple problems. However, more recently, thanks to various improvements in GP technology and to the well known exponential growth in CPU power, GP has started delivering a number of outstanding results. At the time of writing, nearly 40 human-competitive results have been gathered, in areas such as quantum computing, electronic design, game playing, sorting, searching and many more. These results include the replication or infringement of several post-year-2000 inventions, and the production of two patentable new inventions.

Developing a theory for GP has been very difficult and so in the 1990s genetic programming was considered a sort of pariah amongst the various techniques of search. However, after a series of breakthroughs in the early 2000s, the theory of GP has had a formidable and rapid development. So much so that it has been possible to build exact probabilistic models of GP (schema theories and Markov chain models) and to show that GP is more general than, and in fact includes, genetic algorithms.

Genetic Programming techniques have now been applied to evolvable hardware as well as computer programs.

Meta-Genetic Programming is the technique of evolving a genetic programming system using genetic programming itself. Critics have argued that it is theoretically impossible, but more research is needed.

OWL

OWL is an acronym for Web Ontology Language, a markup language for publishing and sharing data using ontologies on the Internet. OWL is a vocabulary extension of the Resource Description Framework (RDF) and is derived from the DAML+OIL Web Ontology Language (see also DAML and OIL). Together with RDF and other components, these tools make up the Semantic Web project.

OWL represents the meanings of terms in vocabularies and the relationships between those terms in a way that is suitable for processing by software.

The OWL specification is maintained by the World Wide Web Consortium (W3C).

OWL currently has three flavors: OWL Lite, OWL DL, and OWL Full. These flavors incorporate different features; in general it is easier to reason about OWL Lite than OWL DL, and about OWL DL than OWL Full. OWL Lite and OWL DL are constructed in such a way that every statement can be decided in finite time; OWL Full can contain endless loops.

VOIP in mobile phones

Today, when mobile phones rule the world of communication, communication has become an expensive affair. The cheapest way of calling is still PC-to-PC calling, since it uses VoIP. Let us see the advantages and disadvantages of using VoIP in mobile phones over different networks such as GPRS/EDGE, Bluetooth and Wi-Fi.

Ruby on Rails

Ruby on Rails, often called RoR or just Rails, is an open source web application framework written in Ruby that closely follows the Model-View-Controller (MVC) architecture. It strives for simplicity, allowing real-world applications to be developed with less code than other frameworks and with a minimum of configuration.

The Ruby programming language allows for extensive metaprogramming, which Rails makes much use of. This results in a syntax that many of its users find to be very readable. Rails is primarily distributed through RubyGems, which is the official packaging format and distribution channel for Ruby libraries and applications.

AJAX

Ajax, shorthand for Asynchronous JavaScript and XML, is a web development technique for creating interactive web applications. The intent is to make web pages feel more responsive by exchanging small amounts of data with the server behind the scenes, so that the entire web page does not have to be reloaded each time the user makes a change. This is meant to increase the web page's interactivity, speed, and usability.

The Ajax technique uses a combination of:
  • XHTML (or HTML) and CSS, for marking up and styling information.
  • The DOM accessed with a client-side scripting language, especially ECMAScript implementations such as JavaScript and JScript, to dynamically display and interact with the information presented.
  • The XMLHttpRequest object to exchange data asynchronously with the web server. In some Ajax frameworks and in certain situations, an IFrame object is used instead of the XMLHttpRequest object to exchange data with the web server.
  • XML is sometimes used as the format for transferring data between the server and client, although any format will work, including preformatted HTML, plain text, JSON and even EBML.
Like DHTML, LAMP and SPA, Ajax is not a technology in itself, but a term that refers to the use of a group of technologies together.

A simple script
function deleterow(id) {
    if (confirm("Are you sure you want to delete row number " + id + "?")) {

        // Set up the request
        var xmlhttp = new XMLHttpRequest();
        xmlhttp.open("POST", "mycheck.php", true);

        // The callback function
        xmlhttp.onreadystatechange = function() {
            if (xmlhttp.readyState == 4) { // the request has completed
                if (xmlhttp.status == 200) {
                    var response_stat = xmlhttp.responseXML.getElementsByTagName("deleted")[0].firstChild.data;
                    if (response_stat == 1) {
                        alert("Deleted row successfully.");
                    } else {
                        alert("Failed to delete row: " + xmlhttp.responseXML.getElementsByTagName("error")[0].firstChild.data + ".");
                    }
                }
            }
        }

        // Send the POST request
        xmlhttp.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
        xmlhttp.send("row=" + id);
    }
}

SIP - Session Initiation Protocol

Session Initiation Protocol (SIP) is a protocol developed by the IETF MMUSIC Working Group and a proposed standard for initiating, modifying, and terminating an interactive user session that involves multimedia elements such as video, voice, instant messaging, online games, and virtual reality.
SIP clients traditionally use TCP and UDP port 5060 to connect to SIP servers and other SIP endpoints. SIP is primarily used in setting up and tearing down voice or video calls. However, it can be used in any application where session initiation is a requirement, such as event subscription and notification, terminal mobility and so on. There are a large number of SIP-related RFCs that define behavior for such applications. All voice/video communications are done over RTP.
A motivating goal for SIP was to provide a signaling and call setup protocol for IP-based communications that can support a superset of the call processing functions and features present in the public switched telephone network (PSTN).
SIP enabled telephony networks can also implement many of the more advanced call processing features present in Signalling System 7 (SS7), though the two protocols themselves are very different. SS7 is a highly centralized protocol, characterized by highly complex central network architecture and dumb endpoints (traditional telephone handsets). SIP is a peer-to-peer protocol.
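To make the text-based nature of SIP signalling concrete, the sketch below assembles a minimal INVITE request in Python. The addresses, tag and Call-ID are invented for the example, and a real request carries further mandatory headers (Via, Max-Forwards, Contact) and usually an SDP body:

```python
def build_invite(caller, callee, call_id):
    """Assemble a stripped-down SIP INVITE request as plain text.
    Illustrative only: real requests need Via, Max-Forwards, Contact, SDP, etc."""
    lines = [
        "INVITE sip:%s SIP/2.0" % callee,
        "From: <sip:%s>;tag=1928301774" % caller,
        "To: <sip:%s>" % callee,
        "Call-ID: %s" % call_id,
        "CSeq: 1 INVITE",
        "Content-Length: 0",
    ]
    # SIP, like HTTP, terminates each line with CRLF and the headers with a blank line
    return "\r\n".join(lines) + "\r\n\r\n"

msg = build_invite("alice@example.com", "bob@example.org", "a84b4c76e66710")
```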

SIP network elements

Hardware endpoints, devices with the look, feel, and shape of a traditional telephone, but that use SIP and RTP for communication, are commercially available from several vendors. Some of these can use Electronic Numbering (ENUM) or DUNDi to translate existing phone numbers to SIP addresses using DNS, so calls to other SIP users can bypass the telephone network, even though your service provider might normally act as a gateway to the PSTN network for traditional phone numbers (and charge you for it).

SIP makes use of elements called proxy servers to help route requests to the user's current location, authenticate and authorize users for services, implement provider call-routing policies, and provide features to users.
SIP also provides a registration function that allows users to upload their current locations for use by proxy servers.
Since registrations play an important role in SIP, a User Agent Server that handles a REGISTER is given the special name registrar.
It is an important concept that the distinction between types of SIP servers is logical, not physical.

Mobile agent

In computer science, a mobile agent is a composition of computer software and data which is able to migrate (move) from one computer to another autonomously and continue its execution on the destination computer.
A mobile agent is thus a type of software agent with the features of autonomy, social ability, learning, and, most importantly, mobility.
When the term mobile agent is used, it refers to a process that can transport its state from one environment to another, with its data intact, and still perform appropriately in the new environment. Mobile agents decide for themselves when and where to move next, an idea that evolved from RPC. How exactly does a mobile agent move? Much as a user doesn't really visit a website but only receives a copy of it, a mobile agent accomplishes the move through data duplication: when it decides to move, it saves its own state, transports this saved state to the next host, and resumes execution from the saved state.
Mobile agents are a specific form of mobile code and software agents paradigms. However, in contrast to the Remote evaluation and Code on demand paradigms, mobile agents are active in that they may choose to migrate between computers at any time during their execution. This makes them a powerful tool for implementing distributed applications in a computer network.
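The save-state/transport/resume cycle described above can be sketched with Python's pickle module. This is a toy illustration of weak mobility (only data state is captured, not the execution stack); the class and host names are invented:

```python
import pickle

class MobileAgent:
    """Toy agent that records each 'host' it visits and sums the host's data."""
    def __init__(self):
        self.visited = []

    def run(self, host_name, host_data):
        self.visited.append(host_name)
        return sum(host_data)

    def checkpoint(self):
        # Serialize the agent's state for transport to the next host.
        # Strong mobility would also have to capture the execution stack.
        return pickle.dumps(self)

agent = MobileAgent()
agent.run('host-A', [1, 2, 3])
blob = agent.checkpoint()       # these bytes would travel over the network
resumed = pickle.loads(blob)    # "arrival": rebuild the agent from saved state
resumed.run('host-B', [4, 5])   # execution continues with history intact
```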

Advantages
1) Move computation to data, reducing network load.
2) Asynchronous execution on multiple heterogeneous network hosts
3) Dynamic adaptation - actions are dependent on the state of the host environment
4) Tolerant to network faults - able to operate without an active connection between client and server
5) Flexible maintenance - to change an agent's actions, only the source (rather than the computation hosts) must be updated

Applications
1) Resource availability, discovery, monitoring
2) Information retrieval
3) Network management
4) Dynamic software deployment

Choreography

Choreography, in a Web services context, refers to specifications for how messages should flow among diverse, interconnected components and applications to ensure optimum interoperability. The term is borrowed from the dance world, in which choreography directs the movement and interactions of dancers.

Web services choreography can be categorized as abstract, portable or concrete:

  • In abstract choreography, exchanged messages are defined only according to the data type and transmission sequence.
  • Portable choreography defines the data type, transmission sequence, structure, control methods and technical parameters.
  • Concrete choreography is similar to portable choreography but includes, in addition, the source and destination URLs as well as security information such as digital certificates.

Stream computing

The main task of stream computing is to pull in streams of data, process them, and stream the result back out as a single flow, thereby analysing multiple live data streams from many sources. Stream computing uses software algorithms that analyse the data in real time as it streams in, increasing speed and accuracy when dealing with data handling and analysis. System S, the stream computing system of IBM introduced in June 2007, runs on 800 microprocessors, and its software enables applications to split up tasks and then reassemble the data into an answer. ATI Technologies has also announced a stream computing technology, derived from a class of applications that run on the GPU instead of the CPU, which enables graphics processors (GPUs) to work in conjunction with high-performance, low-latency CPUs to solve complex computational problems.
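The core idea of analysing data in real time as it streams in, rather than storing it all first, can be illustrated with a small Python generator (a toy sketch, unrelated to System S itself):

```python
def stream_mean(stream):
    """Incrementally compute the running mean of a data stream,
    processing each item as it arrives instead of buffering everything."""
    count, total = 0, 0.0
    for value in stream:
        count += 1
        total += value
        yield total / count   # an up-to-date result after every item

# The source could be a live sensor or socket; a list stands in here.
readings = iter([10.0, 20.0, 30.0])
means = list(stream_mean(readings))
```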

Pervasive computing

Ubiquitous computing or pervasive computing is the result of computer technology advancing at exponential speeds -- a trend toward all man-made and some natural products having hardware and software. With each day computing devices become progressively smaller and more powerful. Pervasive computing goes beyond the realm of personal computers: it is the idea that almost any device, from clothing to tools to appliances to cars to homes to the human body to your coffee mug, can be embedded with chips to connect the device to an infinite network of other devices.

The main aim of pervasive computing, which combines current network technologies with wireless computing, voice recognition, Internet capability and artificial intelligence, is to create an environment where the connectivity of devices is embedded in such a way that the connectivity is unobtrusive and always available.

Datagram Congestion Control Protocol (DCCP)

Many internet applications, such as streaming media and internet telephony, favour timeliness over reliability. This makes TCP a less preferred choice. UDP cannot fill the gap either, as it lacks congestion control, and high-bandwidth UDP applications therefore pose great risks to congested networks. So there was a natural need for a congestion-controlled unreliable transport protocol. The result, the Datagram Congestion Control Protocol or DCCP, adds to a UDP-like foundation the minimum mechanisms necessary to support congestion control. The resulting protocol sheds light on how congestion control interacts with unreliable transport, how modern network constraints impact protocol design, and how TCP's reliable byte-stream semantics intertwine with its other mechanisms, including congestion control.
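The style of sender-side congestion control that DCCP brings to unreliable datagram flows can be sketched with the classic AIMD (additive-increase/multiplicative-decrease) rule. This is illustrative only; DCCP's actual CCID algorithms are considerably more elaborate:

```python
def aimd(events, cwnd=1.0):
    """Trace an AIMD congestion window over a sequence of 'ack'/'loss' events."""
    trace = []
    for event in events:
        if event == 'ack':
            cwnd += 1.0                   # additive increase per round trip
        else:                             # packet loss signals congestion
            cwnd = max(1.0, cwnd / 2.0)   # multiplicative decrease
        trace.append(cwnd)
    return trace

# The window grows linearly while the path is clear, then halves on loss.
window_trace = aimd(['ack', 'ack', 'ack', 'loss', 'ack'])
```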

Choke packet

The picture of an irate system administrator trying to choke their router is what comes to mind when you see this term. While we think this should be used in anger management seminars for network administrators, sadly the term choke packet is already taken, being used to describe a specialized packet used for flow control along a network.

Holographic Versatile Disc (HVD)

This optical disc technology, still in the early stages of research and known as collinear holography, may soon gain the upper hand over existing technologies such as the Blu-ray and HD DVD optical disc systems with respect to storage capacity. It consists of a blue-green laser and a red laser collimated into a single beam: the blue-green laser reads data encoded as laser interference fringes from a holographic layer near the top of the disc, while the red laser serves as the reference beam and reads servo information from a regular CD-style aluminium layer near the bottom.

Trans European Services for Telematics between Administrations (TESTA)

The Trans European Services for Telematics between Administrations (TESTA) system is the private IP-based network of the European Community. TESTA is a telecommunications interconnection platform for secure information exchange between the European public administrations.

TESTA is not a single network, but a network of networks, composed of the EuroDomain backbone and Local Domain networks. The EuroDomain is a European backbone network for administrative data exchanges acting as a network communication platform between local administrations.

Cooperative Linux

Employing the novel idea of a cooperative virtual machine (CVM), Cooperative Linux, or coLinux, allows the kernels of both Microsoft Windows and Linux to run in parallel on the same machine. Unlike traditional VMs, where every guest runs in unprivileged mode and the resources are virtualised for each OS, the CVM gives both OSs complete control of the host machine. Since today's system hardware is not designed to deal with two operating systems at a time, the word cooperative, denoting two bodies working in parallel, only theoretically suits the whole idea.

So, although each of the kernels has its own complete CPU context and address space, and can also decide when to give control back to its associate, what really happens is that the host kernel is left in control of the real hardware, while the guest kernel, equipped with special drivers that communicate with the host, provides various important devices to the guest OS.

C3D

C3D has had a breakthrough in the 3-D arena with their FMD technology, which allows multiple layers of data to be printed onto the surface of a CD-sized 12 cm disk. What sets FMD apart is the sheer number of layers that are made possible.

C3D's fluorescent technology could scale up to an impressive 1.4 terabytes of data storage when applied on a single-sided 12 cm disc with 100 layers. With C3D's new FMD technologies, gigabytes will replace megabytes as data storage's common currency.

Wolfpack

Wolfpack is the codename for Microsoft's clustering solution. Officially known as Microsoft Cluster Server (MSCS), it was released in September 1997 as part of Windows NT 4.0, Enterprise Edition.

WISENET

WISENET is a wireless sensor network that monitors environmental conditions such as light, temperature, and humidity. The network is composed of nodes called 'motes' that form an ad-hoc network to transmit this data to a computer that functions as a server.

The server stores the data in a database where it can later be retrieved and analyzed via a web-based interface. The network works successfully with an implementation of one sensor mote.

Bayesian Networks

Bayesian Networks are becoming an increasingly important area for research and application in the entire field of Artificial Intelligence. This paper explores the nature and implications of Bayesian Networks, beginning with an overview and comparison of inferential statistics and Bayes' theorem. The nature, relevance and applicability of Bayesian Network theories for advanced computability form the core of the current discussion.
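Bayes' theorem, which the overview builds on, updates a prior belief in a hypothesis H given evidence E: P(H|E) = P(E|H)·P(H) / P(E). A small worked example in Python (the diagnostic-test numbers are invented for illustration):

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' theorem for a binary hypothesis and a positive test result."""
    # P(E) expands over both hypotheses: true positives plus false positives
    p_evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_evidence

# Illustrative numbers: 1% prevalence, 95% sensitivity, 5% false-positive rate.
# Despite the accurate test, a positive result only raises the probability
# to about 16%, because the condition is rare to begin with.
p = posterior(0.01, 0.95, 0.05)
```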

A number of current applications using Bayesian networks are examined. The paper concludes with a brief description of the appropriateness and limitations of Bayesian networks for human-computer interactions and automated learning.

Zombie

In the world of UNIX, a zombie refers to a 'child' program that was started by a 'parent' program but then abandoned by the parent. Zombie is also used to describe a computer that has been implanted with a daemon that puts it under the control of a malicious hacker without the knowledge of the computer owner.

MeSCoDe

File management is a very relevant application in the computer field. The 'MeSCoDe' software focuses on splitting files in a user-friendly manner, merging different source files into a single one, compressing a file (which can save a large amount of memory), and decompressing compressed files to reproduce the original file. Compression and decompression are very useful for durable data.

The software, named 'MeSCoDe', has been developed using C++. The title 'MeSCoDe' is an abbreviation of Merging, Splitting, Compression and Decompression. All these facilities are available in 'MeSCoDe'.

Java Cryptography Architecture (JCA)

The Java Cryptography Architecture (JCA) is a framework for working with cryptography using the Java programming language. It forms part of the Java security API, and was first introduced in JDK 1.1 in the java.security package.

Robocode

Robocode is an Open Source educational game by Mathew Nelson (originally provided by IBM). It is designed to help people learn to program in Java and enjoy the experience. It is very easy to start - a simple robot can be written in just a few minutes - but perfecting a bot can take months or more. Competitors write software that controls a miniature tank that fights other identically-built (but differently programmed) tanks in a playing field. Robots move, shoot at each other, scan for each other, and hit the walls (or other robots) if they aren't careful. Though the idea of this game may seem simple, the actual strategy needed to win is not.

Good robots have hundreds of lines in their code dedicated to strategy. Some of the more successful robots use techniques such as statistical analysis and attempts at neural networks in their designs. One can test a robot against many other competitors by downloading their bytecode, so design competition is fierce. Robocode provides a security sandbox (bots are restricted in what they can do on the machine they run on) which makes this a safe thing to do.

Voice Morphing

The method of transforming the source speaker's speech into that of the target speaker is usually referred to as voice morphing, voice transformation or voice conversion. Using linear transformations estimated from time-aligned parallel training data, it transforms the spectral envelope of the source speaker in line with the target speaker. As with image morphing, where the source face smoothly changes its shape and texture into the target face, speech morphing should smoothly change the source voice into another, keeping the shared characteristics of the starting and ending signals. The pitch and the envelope information are two factors that coincide in a speech signal and need to be separated; cepstral analysis is usually employed to extract them.

Sunday, January 3, 2010

XBL

eXtensible Bindings Language is used to declare the behavior and look of XUL widgets and XML elements. In XUL one defines the user interface layout of an application, and then (applying styles) can customize the look of elements. The drawback is that XUL provides no means to change an element's function. For example, one might want to change how the pieces of a scroll bar work. This is where XBL comes in.

An XBL file contains bindings. Each of them describes the behavior of a XUL widget or XML element. For example, a binding might be attached to a scroll bar. The behavior describes the properties and methods of the scroll bar and describes the XUL elements defining the scroll bar.

The root element of an XBL file is the bindings element, which contains one or more binding elements. Each binding element declares one binding, which can be attached to any XUL element. It also may have an id attribute. A binding is assigned to an element by setting the CSS property -moz-binding to the URL of the bindings file.

The XBL 1.0 specification is used in the Mozilla XPFE platform, alongside the XUL XML language, so XBL is available in all Mozilla products: Firefox, Thunderbird, SeaMonkey, etc. However, it must be noted that Mozilla implements a variant of XBL 1.0 which does not quite match the specification.

An XBL 2.0 version of the specification is on the way. The objectives of this version are to address problems of the 1.0 version and to generalize XBL usage in non-Mozilla browsers.

While the body of this version of the specification was created by the Mozilla Foundation, outside W3C (as was the case for the XBL 1.0 version), the W3C Web Application Formats Working Group is now guiding this specification along the W3C Recommendation track.

XPCOM

XPCOM (Cross Platform Component Object Model) is a cross-platform component model similar to CORBA or Microsoft COM. It has multiple language bindings and IDL descriptions, so programmers can plug their custom functionality into the framework and connect it with other components.

XPCOM is one of the main things that makes the Mozilla application environment an actual framework. It is a development environment that provides the following features for the cross-platform software developer:

  • Component management
  • File abstraction
  • Object message passing
  • Memory management

This component object model makes virtually all of the functionality of Gecko available as a series of components, or reusable cross-platform libraries, that can be accessed from the web browser or scripted from any Mozilla application. Applications that want to access the various Mozilla XPCOM libraries (networking, security, DOM, etc.) use a special layer of XPCOM called XPConnect, which reflects the library interfaces into JavaScript (or other languages). XPConnect glues the front end to the C++ or C programming language-based components in XPCOM, and it can be extended to include scripting support for other languages: PyXPCOM already offers support for Python, PerlConnect provides support for Perl, and there are efforts underway to add .NET and Ruby language support for XPConnect.

On the developer side, XPCOM lets you write components in C++, C, JavaScript, Python, or other languages for which special bindings have been created, and compile and run those components on dozens of different platforms, including those where Mozilla itself is supported.


Z-Wave

Z-Wave is an interoperable wireless communication standard developed by the Danish company Zensys and the Z-Wave Alliance. It is designed for low-power and low-bandwidth appliances, such as home automation and sensor networks.
Radio specifications
Bandwidth: 9,600 bit/s or 40,000 bit/s, fully interoperable
Radio specifics
In Europe, the 868 MHz band has a 1% duty cycle limitation, meaning that a Z-wave unit can only transmit 1% of the time. This limitation is not present in the US 908 MHz band, but US legislation imposes a 1 mW transmission power limit (as opposed to 25 mW in Europe). Z-wave units can be in power-save mode and only be active 0.1% of the time, thus reducing power consumption dramatically.

Topology and routing
Z-wave uses an intelligent mesh network topology and has no master node. A message from node A to node C can be successfully delivered even if the two nodes are not within range, provided that a third node B can communicate with both A and C. If the preferred route is unavailable, the message originator will attempt other routes until a path to node C is found. Therefore, a Z-wave network can span much further than the radio range of a single unit. In order for Z-wave units to be able to route unsolicited messages, they cannot be in sleep mode; therefore, most battery-operated devices will opt not to be repeater units. A Z-wave network can consist of up to 232 units, with the option of bridging networks if more units are required.
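The A-via-B-to-C delivery described above amounts to path discovery over a mesh graph. A minimal breadth-first sketch in Python (illustrative only, not the actual Z-Wave routing algorithm; the node names are invented):

```python
from collections import deque

def find_route(links, src, dst):
    """Breadth-first search over the mesh: hop through intermediate nodes
    when source and destination are not in direct radio range."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path          # shortest hop-count route found
        for neighbour in links.get(path[-1], []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None                  # no path: the mesh is partitioned

# A and C are out of range of each other, but both can reach B.
mesh = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B']}
route = find_route(mesh, 'A', 'C')
```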

Application areas
Due to its low bandwidth, Z-wave is not suitable for audio/video applications but is well suited for sensors and control units, which typically transmit only a few bytes at a time.

i-DEN

Integrated Digital Enhanced Network, commonly referred to as iDEN, is a mobile communications technology, developed by Motorola, which provides its users the benefits of a trunked radio and a cellular telephone. Sprint Nextel is the largest US retailer of iDEN services. iDEN places more users in a given spectral space, compared to analog cellular systems, by using time division multiple access (TDMA). Up to six communication channels share a 25 kHz space; some competing technologies place only one channel in 12.5 kHz.
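The spectral-packing claim above is simple arithmetic; a quick illustrative sketch:

```python
# Channel-density comparison implied above: iDEN packs six TDMA channels
# into 25 kHz, versus one channel per 12.5 kHz in some competing systems.

def channels_per_khz(channels, bandwidth_khz):
    return channels / bandwidth_khz

iden = channels_per_khz(6, 25.0)    # 0.24 channels per kHz
other = channels_per_khz(1, 12.5)   # 0.08 channels per kHz
ratio = iden / other                # iDEN fits three times as many channels
```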
iDEN is available in other countries through several carriers, including Nextel International. Nextel International allows direct connect (PTT operation) between users in several countries, including the United States, Argentina, Brazil, Chile, Mexico, Peru, and Canada.
Countries which have operating iDEN networks not currently connected with the US include Jordan, Israel, the Philippines, Singapore, South Korea, Saudi Arabia, Japan, El Salvador and China (in selected areas).
Both data (such as paging, text messaging and picture messaging) and voice communications are supported by iDEN.
iDEN is currently at software release 13.0, supporting a maximum of 3200 sites per urban area. Software release 13.4 has recently passed its test phase and is being rolled out nationally to the Sprint Nextel network. Software release 14.0 is currently in its test phase with Sprint Nextel and will introduce iDEN's next-generation Dispatch Application Processor (DAP).
In order to provide high data rates for packet data, Nextel started to develop a 2.5G technology called WiDEN.
WiDEN is a planned expansion of the iDEN system: instead of using a single 25 kHz channel for packet data, it combines four carriers (100 kHz) into one channel. This allows download speeds of up to 96 kbit/s, comparable to the average CDMA2000 1x speeds of American competitors Sprint and Verizon Wireless.
iDEN is a technology with no clear path to high-speed wireless data. It is thought that, as part of the Sprint Nextel merger, 1xEV-DO will become the 3G data infrastructure for both Sprint and Nextel customers, as part of the transition to CDMA2000.
Following Nextel's merger with Sprint, iDEN may be phased out; however, Nextel has stated it will support iDEN until at least 2010.
There is a smaller subset of the iDEN network called 'Harmony', which has a maximum limit of 30 sites.

Polymer memory

Imagine a time when your mobile will be your virtual assistant and will need far more than the 8k and 16k of memory that it has today, or a world where laptops require gigabytes of memory because of the impact of convergence on the very nature of computing. How much space would your laptop need to carry all that memory capacity? Not much, if Intel's project with Thin Film Electronics ASA (TFE) of Sweden works according to plan. TFE's idea is to use polymer memory modules rather than silicon-based memory modules, and what's more, it is going to use an architecture quite different from that of silicon-based modules.

While microchip makers continue to wring more and more from silicon, the most dramatic improvements in the electronics industry could come from an entirely different material: plastic. Labs around the world are working on integrated circuits, displays for handheld devices and even solar cells that rely on electrically conducting polymers—not silicon—for cheap and flexible electronic components. Now two of the world’s leading chip makers are racing to develop new stock for this plastic microelectronic arsenal: polymer memory. Advanced Micro Devices of Sunnyvale, CA, is working with Coatue, a startup in Woburn, MA, to develop chips that store data in polymers rather than silicon. The technology, according to Coatue CEO Andrew Perlman, could lead to a cheaper and denser alternative to flash memory chips—the type of memory used in digital cameras and MP3 players. Meanwhile, Intel is collaborating with Thin Film Technologies in Linköping, Sweden, on a similar high-capacity polymer memory.

Tripwire

A penetration usually involves a change of some kind: a new port has been opened, a new service is running, or, most commonly, a file has changed. If you can identify the key subset of these files and monitor them on a daily basis, you will be able to detect whether any intrusion took place. Tripwire is an open source program created to monitor changes in a key subset of files identified by the user and report on any changes in any of those files. When changes are detected, the system administrator is informed. Tripwire's principle is very simple: the system administrator identifies key files and has Tripwire record checksums for those files. He also puts in place a cron job whose task is to scan those files at regular intervals (daily or more frequently), comparing them to the original checksums.

Any change, addition, or deletion is reported to the administrator, who can then determine whether the change was permitted or unauthorized. In the former case, the database is updated so that the same change is not flagged again in future. In the latter case, proper recovery action is taken immediately.
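The checksum-and-compare cycle described above can be sketched in Python. The use of SHA-256 and the reporting format are illustrative; Tripwire itself supports several hash functions and a much richer policy language:

```python
import hashlib
import os

def checksum(path):
    """SHA-256 digest of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(8192), b''):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths):
    """Record checksums for the key files the administrator identified."""
    return {p: checksum(p) for p in paths}

def scan(baseline):
    """Compare current state against the baseline, as the cron job would."""
    report = {}
    for path, old in baseline.items():
        if not os.path.exists(path):
            report[path] = 'deleted'
        elif checksum(path) != old:
            report[path] = 'changed'
    return report
```

An empty report means no monitored file was touched; anything else goes to the administrator for the permitted/unauthorized decision described above.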

BiCMOS

The history of semiconductor devices starts in the 1930s, when Lilienfeld and Heil first proposed the MOSFET. However, it took 30 years before this idea was applied to functioning devices in practical applications, and bipolar technology dominated until the late 1980s, when MOS technology caught up and there was a crossover between bipolar and MOS market share. CMOS found ever wider use due to its low power dissipation, high packing density and simple design, such that by 1990 CMOS accounted for more than 90% of the total MOS market.

In 1983, a bipolar-compatible process based on CMOS technology was developed, and BiCMOS technology, with both MOS and bipolar devices fabricated on the same chip, was developed and studied. The objective of BiCMOS is to combine bipolar and CMOS so as to exploit the advantages of both at the circuit and system levels. Since 1985, state-of-the-art bipolar and CMOS structures have been converging. Today BiCMOS has become one of the dominant technologies for high-speed, low-power and highly functional VLSI circuits, especially as the BiCMOS process has been enhanced and integrated into the CMOS process without any additional steps. Because the process steps required for CMOS and bipolar are similar, these steps can be shared between them.

FLUORESCENT MULTILAYER DISC (FMD)

The demand for digital storage capacity is growing at more than 60% per annum. Facilities like storage area networks, data warehouses, supercomputers and e-commerce-related data mining require much greater capacity to process the volume of data.
Further, with the advent of high-bandwidth Internet and data-intensive applications such as high-definition TV (HDTV) and video and music on demand, even small devices such as personal VCRs, PDAs and mobile phones will require multi-gigabyte and terabyte capacity in the next couple of years.

This ever-increasing capacity demand can only be met by a steady increase in the areal density of magnetic and optical recording media. In future, this density increase is possible by taking advantage of shorter-wavelength lasers, higher lens numerical aperture (NA), or near-field techniques. Today, optical data storage capacities have been increased by creating double-sided media; this approach to increasing effective storage capacity is quite unique to optical memory technologies. The fluorescent multilayer disc (FMD) is a three-dimensional storage medium for large amounts of data. Three-dimensional optical storage opens up another dimension for increasing the capacity of a given volume of media, with the objective of achieving a cubic storage element having the dimensions of the writing/reading laser wavelength. The current wavelength of 650 nm should be sufficient to store up to a terabyte of data.
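The appeal of multilayer storage is that capacity scales linearly with the number of recording layers. A toy sketch of that scaling (the 4.7 GB-per-layer figure is a DVD-class assumption used purely for illustration, not an FMD specification):

```python
# Capacity of a multilayer disc scales linearly with layer count.
# The 4.7 GB-per-layer figure is an assumed DVD-class value, not FMD spec.

def multilayer_capacity_gb(per_layer_gb, layers):
    return per_layer_gb * layers

single = multilayer_capacity_gb(4.7, 1)     # one layer: DVD-class capacity
stacked = multilayer_capacity_gb(4.7, 100)  # 100 fluorescent layers: 470 GB
```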

Surface-Conduction Electron-Emitter Display (SED)

A Surface-conduction Electron-emitter Display (SED) is a flat panel display technology that uses a surface conduction electron emitter for every individual display pixel. The surface conduction electron emitter emits electrons that excite a phosphor coating on the display panel, which is the same basic concept found in traditional cathode ray tube (CRT) televisions. This means that SEDs can combine the slim form factor of LCDs with the high contrast ratios and fast refresh rates of CRTs. Research so far claims that SEDs consume less power than LCD displays. The surface conduction electron emitter apparatus consists of a thin slit across which electrons tunnel when excited by moderate voltages (tens of volts).

When the electrons cross electric poles across the thin slit, some are scattered at the receiving pole and are accelerated towards the display surface by a large voltage gradient (tens of kV) between the display panel and the surface conduction electron emitter apparatus. SED displays offer brightness, color performance and viewing angles on par with CRTs, yet they do not require a deflection system for the electron beam. As a result, engineers can create a display that is just a few inches thick and still light enough for wall-hanging designs. The manufacturer can enlarge the panel merely by increasing the number of electron emitters to match the required number of pixels. SED technology has been in development since 1987; Canon and Toshiba are the two major companies working on it.

DLP Projector

Powered by digital electronics, this optical solution provides an all-digital connection between a graphics or video source and the screen in movie projectors, televisions, home theatre systems and business video projectors. The Digital Micromirror Device (DMD), or DLP® chip, is a rectangular array of up to 2 million hinge-mounted microscopic mirrors (each measuring less than one-fifth the width of a human hair) that controls the light in the optical semiconductor at the heart of the DLP projection system.

A digital image is projected onto the screen or surface using these mirrors simply by synchronising the digital video or graphics signal, a light source, and a projection lens. The advantages of DLP projectors over conventional projection systems are: 1) digital grayscale and color reproduction, thanks to the chip's digital nature, which makes DLP the final link in the digital video infrastructure; 2) greater efficiency than competing transmissive LCD technologies, since DLP is built on the reflective DMD; 3) the capacity to create seamless, film-like images. So with DLP, experience the digital revolution.
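Because each micromirror is strictly on or off, a DLP chip produces grayscale by binary pulse-width modulation: each bit of a pixel's intensity value holds the mirror "on" for a binary-weighted slice of the frame time. A simplified sketch (the 60 Hz frame time and 8-bit depth are assumptions for illustration):

```python
# Binary pulse-width modulation of an on/off mirror into grayscale.
# FRAME_MS assumes a 60 Hz refresh; BITS assumes 8-bit intensity.

FRAME_MS = 16.7
BITS = 8

def mirror_on_time_ms(intensity):
    """Total 'on' time within one frame for an 8-bit intensity (0-255)."""
    total_slices = (1 << BITS) - 1  # 255 binary-weighted time slices
    on_slices = sum(1 << b for b in range(BITS) if intensity & (1 << b))
    return FRAME_MS * on_slices / total_slices

# Full white keeps the mirror on for the whole frame; mid-gray about half.
white = mirror_on_time_ms(255)
gray = mirror_on_time_ms(128)
```

The eye integrates these rapid flashes into a continuous brightness, which is why a purely binary mirror can still render smooth grayscale.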

CCD vs. CMOS – Image

Posing a great challenge to traditional charge-coupled devices (CCDs) in various applications, CMOS image sensors have improved over time, finding solutions to their problems with noise and sensitivity. Active pixel sensors built on sub-micron technologies have enabled low-power, low-voltage, monolithic integration. The manufacture of miniaturised single-chip digital cameras is one example of this technology.

The incorporation of advanced techniques at the chip or pixel level has opened new dimensions for the technology. Now, after a decade, the initial tussle over whether complementary metal-oxide-semiconductor (CMOS) technology would displace the charge-coupled device (CCD) has slowly subsided, revealing the strengths and weaknesses of both technologies.

Abstract Cell

This project of a multicore CPU, or supercomputer on a chip, which has opened a new dimension in the era of devices, is a joint undertaking by the corporate giants Sony, Toshiba and IBM (STI). This innovative technology will find applications in Sony's PlayStation 3 video game console, in replacing existing processors, in broadband network technology, and in boosting the performance of existing electronic devices. According to the available details, this single chip can perform 1 trillion floating-point operations per second (1 TFLOP), several hundred times faster than a high-end personal computer.

The core concept of this project is to devote more room to hardware resources that perform parallel computations rather than favouring single-threaded performance. This means only minimal resources are allocated to single-threaded operations, while the rest go to highly parallelizable multimedia-type computations, such as the multiple DSP-like processing elements.
