

Electronics Topics Category

Artificial Passenger

Added on: March 6th, 2017 by Afsal Meerankutty No Comments

The Artificial Passenger (AP) is an artificial intelligence–based companion resident in software and chips embedded in the automobile dashboard. The heart of the system is a conversation planner that holds a profile of you, including details of your interests and profession.

A microphone picks up your answer and breaks it down into separate words with speech-recognition software. A camera built into the dashboard also tracks your lip movements to improve the accuracy of the speech recognition. A voice analyzer then looks for signs of tiredness by checking to see if the answer matches your profile. Slow responses and a lack of intonation are signs of fatigue.
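The description above reduces to a scoring problem: slow responses and flat intonation both indicate fatigue. The sketch below is purely illustrative (it is not IBM's actual algorithm); the thresholds `max_delay_s` and `min_pitch_std_hz` and the equal weighting are assumptions.

```python
def fatigue_score(response_delay_s, pitch_std_hz,
                  max_delay_s=3.0, min_pitch_std_hz=30.0):
    """Return a 0..1 fatigue score: slow answers and flat intonation
    (low pitch variation) both push the score toward 1."""
    delay_term = min(response_delay_s / max_delay_s, 1.0)
    flatness_term = 1.0 - min(pitch_std_hz / min_pitch_std_hz, 1.0)
    return 0.5 * delay_term + 0.5 * flatness_term

# An alert driver answers quickly with lively intonation (low score)...
print(fatigue_score(0.6, 45.0))
# ...while a drowsy driver answers slowly in a monotone (high score).
print(fatigue_score(2.8, 5.0))
```

In a real system these two features would be extracted by the speech recognizer and voice analyzer; here they are passed in directly.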

This research suggests that we can make predictions about various aspects of driver performance based on what we glean from the movements of a driver’s eyes and that a system can eventually be developed to capture this data and use it to alert people when their driving has become significantly impaired by fatigue.

MOCT (Magneto-Optical Current Transformer)

Added on: February 7th, 2017 by Afsal Meerankutty No Comments

An accurate current transducer is a key component of any power system instrumentation. To measure currents, power stations and substations conventionally employ inductive current transformers. As the short-circuit capacities of power systems grow larger and voltage levels go higher, these conventional current transducers become increasingly bulky and costly.

The newly emerged MOCT technology appears to solve many of the problems posed by conventional current transformers. An MOCT measures the rotation angle of plane-polarized light caused by the magnetic field and converts it into a signal of a few volts proportional to that field.

The main advantages of an MOCT are that there is no need to break the conductor to enclose the optical path in the current-carrying circuit, and that it is immune to electromagnetic interference.
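The measurement principle can be made concrete. The Faraday rotation is θ = V ∮H·dl, and when the optical path closes around the conductor, Ampère's law gives ∮H·dl = N·I. The Verdet constant and turn count below are illustrative values, not those of any real device:

```python
V_VERDET = 4.0e-6   # rad/A, assumed Verdet constant of the sensing glass
N_TURNS = 3         # optical path encircles the conductor 3 times

def rotation_angle(current_a):
    """Faraday rotation (rad) produced by the enclosed current."""
    return V_VERDET * N_TURNS * current_a

def measured_current(theta_rad):
    """Invert the measured rotation back to the primary current."""
    return theta_rad / (V_VERDET * N_TURNS)

theta = rotation_angle(2000.0)   # 2 kA primary current
print(measured_current(theta))   # recovers 2000.0 A
```

Because the closed path makes the line integral depend only on the enclosed current, the reading is insensitive to external fields, which is the basis of the interference immunity noted above.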


Memristor

Added on: January 25th, 2017 by Afsal Meerankutty No Comments

Generally, when most people think about electronics, they may initially think of products such as cell phones, radios, and laptop computers. Others, having some engineering background, may think of resistors, capacitors, and the other basic components necessary for electronics to function. Such basic components are fairly limited in number, and each has its own characteristic function.

Memristor theory was formulated and named by Leon Chua in a 1971 paper. Chua strongly believed that a fourth device existed to provide conceptual symmetry with the resistor, inductor, and capacitor. This symmetry follows from the description of basic passive circuit elements as defined by a relation between two of the four fundamental circuit variables. A device linking charge and flux (themselves defined as time integrals of current and voltage), which would be the memristor, was still hypothetical at the time. However, it would not be until thirty-seven years later, on April 30, 2008, that a team at HP Labs led by the scientist R. Stanley Williams would announce the discovery of a switching memristor. Based on a thin film of titanium dioxide, it has been presented as an approximately ideal device.

The reason that the memristor is radically different from the other fundamental circuit elements is that, unlike them, it carries a memory of its past. When you turn off the voltage to the circuit, the memristor still remembers how much was applied before and for how long. That’s an effect that can’t be duplicated by any circuit combination of resistors, capacitors, and inductors, which is why the memristor qualifies as a fundamental circuit element.
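This memory property follows from the defining relation dφ = M(q) dq: the memristance M depends on the total charge that has flowed through the device. The toy model below is in the spirit of HP's linear-drift description; the values of `R_ON`, `R_OFF` and `Q_D` are illustrative, not measured device parameters.

```python
R_ON, R_OFF = 100.0, 16000.0   # ohms: fully-ON and fully-OFF resistance
Q_D = 1e-2                     # charge (C) needed to switch fully ON

def memristance(q):
    """Resistance after net charge q has passed (clipped to [R_ON, R_OFF])."""
    w = min(max(q / Q_D, 0.0), 1.0)       # normalized internal state
    return R_OFF - (R_OFF - R_ON) * w

q = 0.0
for _ in range(50):            # drive current pulses through the device
    q += 1e-4                  # each pulse delivers 0.1 mC
print(memristance(q))          # resistance has dropped...
print(memristance(q))          # ...and reading it again shows it *stays* there
```

Turning the drive off (no further change in q) leaves the resistance where it was, which is exactly the behavior no combination of ideal resistors, capacitors, and inductors can reproduce.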

The arrangement of these few fundamental circuit components forms the basis of almost all of the electronic devices we use in our everyday life. Thus the discovery of a brand new fundamental circuit element is not to be taken lightly: it has the potential to open the door to a brand new type of electronics. HP already has plans to implement memristors in a new type of non-volatile memory which could eventually replace flash and other memory systems.

Pill Camera

Added on: January 12th, 2017 by Afsal Meerankutty No Comments

The aim of technology is to make products on a large scale at cheaper prices and with increased quality. Current technologies have attained part of this, but manufacturing remains at the macro level. The future lies in manufacturing products right from the molecular level. Research in this direction started back in the eighties; at the time, manufacturing at the molecular and atomic level was laughed at. But with the advent of nanotechnology we have realized it to a certain extent. One such product is the Pill Camera, which is used in the diagnosis of conditions such as cancer, ulcers and anemia. It has revolutionized the field of medicine. This tiny capsule can pass through our body without causing any harm.

It takes pictures of our intestine and transmits them to a receiver for computer analysis of the digestive system. This process can help in tracking any kind of disease related to the digestive system. We also discuss the drawbacks of the Pill Camera and how they can be overcome using a grain-sized motor and a bi-directional wireless telemetry capsule. Besides this, we review the process of manufacturing products using nanotechnology. Some other important applications are also discussed, along with their potential impacts on various fields.

We have made great progress in manufacturing products. Looking back from where we stand now, we started with flint knives and stone tools and have reached the stage where we make such tools with more precision than ever. The leap in technology is great, but it is not going to stop here. With our present technology we manufacture products by casting, milling, grinding, chipping and the like. With these technologies we have made more things at lower cost and greater precision than ever before. In the manufacture of these products we have been arranging atoms in great thundering statistical herds. All of us know that manufactured products are made from atoms, and the properties of those products depend on how those atoms are arranged: if we rearrange the atoms in dirt, water and air, we get grass. The next step in manufacturing technology is to manufacture products at the molecular level. The technology used to achieve this is nanotechnology: the creation of useful materials, devices and systems through the manipulation of matter at this minuscule scale. Nanotechnology deals with objects measured in nanometers; a nanometer is a billionth of a meter, a millionth of a millimeter, or about 1/80,000 of the width of a human hair.


BrainGate

Added on: December 26th, 2016 by Afsal Meerankutty No Comments

BrainGate is a brain implant system developed by the bio-tech company Cyberkinetics in 2003 in conjunction with the Department of Neuroscience at Brown University. The device was designed to help those who have lost control of their limbs, or other bodily functions, such as patients with amyotrophic lateral sclerosis (ALS) or spinal cord injury. The computer chip, which is implanted into the brain, monitors brain activity in the patient and converts the intention of the user into computer commands. Cyberkinetics states that “such applications may include novel communications interfaces for motor impaired patients, as well as the monitoring and treatment of certain diseases which manifest themselves in patterns of brain activity, such as epilepsy and depression.”

The BrainGate Neural Interface device consists of a tiny chip containing 100 microscopic electrodes that is surgically implanted in the brain’s motor cortex. The whole apparatus is the size of a baby aspirin. The chip can read signals from the motor cortex, send that information to a computer via connected wires, and translate it to control the movement of a computer cursor or a robotic arm. According to Dr. John Donoghue of Cyberkinetics, there is practically no training required to use BrainGate, because the signals read by a chip implanted, for example, in the area of the motor cortex for arm movement are the same signals that would be sent to the real arm. A user with an implanted chip can immediately begin to move a cursor with thought alone.
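The translation step from motor-cortex signals to cursor motion can be sketched. A linear filter over per-channel firing rates is a common decoding approach in the brain-computer interface literature (not necessarily BrainGate's exact method); every number below — rates, weights, update interval — is made up for illustration.

```python
def decode_velocity(firing_rates, weights_x, weights_y):
    """Linear decode: each channel's firing rate contributes to vx and vy."""
    vx = sum(r * w for r, w in zip(firing_rates, weights_x))
    vy = sum(r * w for r, w in zip(firing_rates, weights_y))
    return vx, vy

rates = [12.0, 30.0, 7.0, 22.0]    # spikes/s on four of the 100 channels
wx = [0.02, -0.01, 0.00, 0.03]     # hypothetical fitted weights
wy = [-0.01, 0.02, 0.04, 0.00]

cursor = [100.0, 100.0]            # cursor position in pixels
vx, vy = decode_velocity(rates, wx, wy)
dt = 0.05                          # 50 ms update interval
cursor[0] += vx * dt
cursor[1] += vy * dt
print(cursor)
```

In practice the weights would be fitted during a brief calibration session by regressing observed firing rates against intended movements.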

The BrainGate technology platform was designed to take advantage of the fact that many patients with motor impairment have an intact brain that can produce movement commands. This may allow the BrainGate system to create an output signal directly from the brain, bypassing the route through the nerves to the muscles that cannot be used in paralysed people.


DakNet

Added on: November 3rd, 2013 by Afsal Meerankutty 10 Comments

DakNet provides extraordinarily low-cost digital communication, letting remote villages leapfrog past the expense of traditional connectivity solutions and begin development of a full coverage broadband wireless infrastructure. DakNet, an ad hoc network that uses wireless technology to provide asynchronous digital connectivity, is evidence that the marriage of wireless and asynchronous service may indeed be the beginning of a road to universal broadband connectivity.
This paper briefly explains what DakNet is, how wireless technology is implemented in DakNet, its fundamental operations and applications, cost estimates, advantages and disadvantages, and finally how to connect Indian villages with towns, cities and global markets.
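The asynchronous operation at DakNet's core is store-and-forward: a village kiosk queues outgoing data until a vehicle-mounted mobile access point drives into Wi-Fi range, collects it, and later uploads it at an Internet-connected hub. This toy model captures only that queuing idea; the class and message names are illustrative.

```python
from collections import deque

class Kiosk:
    """Village kiosk: queues messages instead of sending them live."""
    def __init__(self):
        self.outbox = deque()
    def send(self, msg):
        self.outbox.append(msg)          # stored locally, not transmitted yet

class MobileAccessPoint:
    """Bus-mounted node that ferries data between kiosks and a wired hub."""
    def __init__(self):
        self.cargo = []
    def drive_past(self, kiosk):
        while kiosk.outbox:              # collect while within Wi-Fi range
            self.cargo.append(kiosk.outbox.popleft())
    def reach_hub(self):
        delivered, self.cargo = self.cargo, []
        return delivered                 # uploaded at the Internet hub

kiosk = Kiosk()
kiosk.send("email to market trader")
kiosk.send("land-record request")
bus = MobileAccessPoint()
bus.drive_past(kiosk)                    # bus passes through the village
delivered = bus.reach_hub()              # bus later reaches the town hub
print(delivered)                         # both messages arrive, delayed but cheap
```

The trade-off is latency for cost: delivery takes hours instead of milliseconds, but no towers, satellites, or cables are needed.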

BiCMOS Technology

Added on: November 3rd, 2013 by Afsal Meerankutty No Comments

The need for high-performance, low-power, and low-cost systems for network transport and wireless communications is driving silicon technology toward higher speed, higher integration, and more functionality. Furthermore, this integration of RF and analog mixed-signal circuits into high-performance digital signal-processing (DSP) systems must be done with minimum cost overhead to be commercially viable. While some analog and RF designs have been attempted in mainstream digital-only complementary metal-oxide semiconductor (CMOS) technologies, almost all designs that require stringent RF performance use bipolar or BiCMOS technology. Silicon integrated circuit (IC) products that, at present, require modern bipolar or BiCMOS silicon technology in the wired application space include synchronous optical network (SONET) and synchronous digital hierarchy (SDH) parts operating at 10 Gb/s and higher.

The viability of a mixed digital/analog/RF chip depends on the cost of making the silicon with the required elements; in practice, it must approximate the cost of the CMOS wafer. Cycle times for processing the wafer should not significantly exceed those for a digital CMOS wafer, and yields of the SOC chip must be similar to those of a multi-chip implementation. Much of this article examines process techniques that achieve the objectives of low cost, rapid cycle time, and solid yield.

Space Time Adaptive Processing

Added on: October 31st, 2013 by Afsal Meerankutty 2 Comments

Space-time adaptive processing (STAP) is a signal processing technique most commonly used in radar systems. It involves adaptive array processing algorithms to aid in target detection. Radar signal processing benefits from STAP in areas where interference is a problem (i.e. ground clutter, jamming, etc.). Through careful application of STAP, it is possible to achieve order-of-magnitude sensitivity improvements in target detection.
STAP involves a two-dimensional filtering technique using a phased-array antenna with multiple spatial channels. Coupling multiple spatial channels with pulse-Doppler waveforms lends to the name “space-time.” Applying the statistics of the interference environment, an adaptive STAP weight vector is formed. This weight vector is applied to the coherent samples received by the radar.
In a ground moving target indicator (GMTI) system, an airborne radar collects the returned echo from the moving target on the ground. However, the received signal contains not only the reflected echo from the target, but also the returns from the illuminated ground surface. The return from the ground is generally referred to as clutter.
The clutter return comes from all the areas illuminated by the radar beam, so it occupies all range bins and all directions. The total clutter return is often much stronger than the returned signal echo, which poses a great challenge to target detection. Clutter filtering, therefore, is a critical part of a GMTI system.
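The adaptive weight computation described above has a standard closed form: the optimum weight vector is w = k·R⁻¹s, where R is the estimated space-time interference covariance and s is the target steering vector. The 2×2 covariance below is a toy stand-in for real clutter statistics, used only to show the mechanics.

```python
import numpy as np

def stap_weights(R, s):
    """Adaptive weights w = R^-1 s, normalized for unit gain on the target."""
    Rinv_s = np.linalg.solve(R, s)          # avoids forming R^-1 explicitly
    return Rinv_s / (s.conj() @ Rinv_s)

# Strongly correlated interference on two channels, target steering [1, 1]:
R = np.array([[2.0, 1.8],
              [1.8, 2.0]])
s = np.array([1.0, 1.0])

w = stap_weights(R, s)
print(w @ s)          # gain in the target direction is preserved at 1
```

Applying w to the received snapshots suppresses the correlated clutter while leaving the target response at unit gain, which is how the order-of-magnitude sensitivity improvements arise.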


Biosensors

Added on: October 31st, 2013 by Afsal Meerankutty 2 Comments

A biosensor is a device for the detection of an analyte that combines a biological component with a physicochemical detector component. Many optical biosensors based on the phenomenon of surface plasmon resonance are evanescent wave techniques. The most widespread example of a commercial biosensor is the blood glucose biosensor, which uses the enzyme glucose oxidase to break blood glucose down.
Biosensors are the combination of a bioreceptor and a transducer. The bioreceptor is a biomolecule that recognizes the target, while the transducer converts the recognition event into a measurable signal. Biosensors are used in many diverse areas, including clinical testing, one of the biggest diagnostic markets at US$4,000 million.
They measure a specific analyte with great accuracy and speed, and they are very simple to use: receptor and transducer are integrated into a single sensor that works without reagents.

Haptic Systems

Added on: October 10th, 2013 by Afsal Meerankutty 5 Comments

‘Haptics’ is a technology that adds the sense of touch to virtual environments. Users are given the illusion that they are touching or manipulating a real physical object.
This seminar discusses the important concepts in haptics, some of the most commonly used haptics systems like ‘Phantom’, ‘Cyberglove’, ‘Novint Falcon’ and such similar devices. Following this, a description about how sensors and actuators are used for tracking the position and movement of the haptic systems, is provided.
The different types of force rendering algorithms are discussed next. The seminar explains the blocks in force rendering. Then a few applications of haptic systems are taken up for discussion.

Black Box

Added on: October 8th, 2013 by Afsal Meerankutty 5 Comments

As technology progresses, travel speeds have also increased, bringing sources and destinations ever closer together. The main advancement in air travel came with the airplane, a major discovery of technology. But with increased speed came the horror of air crashes: a crash from a height of 2,000 m or more is catastrophic. So, to obtain feedback on the various activities that happen in a plane and record them, engineers needed a dedicated recording mechanism.
With any airplane crash, there are many unanswered questions as to what brought the plane down. Investigators turn to the airplane’s flight data recorder (FDR) and cockpit voice recorder (CVR), also known as “black boxes,” for answers. In Flight 261, the FDR contained 48 parameters of flight data, and the CVR recorded a little more than 30 minutes of conversation and other audible cockpit noises.

Underwater Communication Systems

Added on: October 3rd, 2013 by Afsal Meerankutty 7 Comments

There is a high demand for underwater communication systems due to the increase in current human underwater activities. Underwater communication systems employ either sonar or electromagnetic waves as a means of transferring signals. These waves are different physically and electrically, and thus the systems that employ them also differ in their design architecture, wave propagation and devices used for emission and reception. As a result, the two systems have varying advantages and limitations. This paper presents an in-depth review of underwater communication based on sonar and electromagnetic waves, a comparison of the two systems and a discussion of the environmental impacts of using these waves for underwater communication. In the tradeoff between preserving the underwater environment and the need for underwater communication, it appears that underwater electromagnetic wave communication has the most potential to be the environmentally-friendly system of the future.
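A back-of-the-envelope contrast between the two carriers makes the design trade-off concrete: acoustic waves are slow but propagate far, while EM waves are fast but are absorbed strongly by conductive seawater. The figures below are typical textbook values, not measurements from any particular system.

```python
DISTANCE_M = 1000.0        # a 1 km underwater link
SOUND_SPEED = 1500.0       # m/s, typical speed of sound in seawater
LIGHT_SPEED = 3.0e8        # m/s, upper bound for the EM wave's speed

acoustic_delay = DISTANCE_M / SOUND_SPEED   # on the order of two-thirds of a second
em_delay = DISTANCE_M / LIGHT_SPEED         # on the order of microseconds

print(acoustic_delay, em_delay)
```

The five-orders-of-magnitude delay gap favors EM waves for latency-sensitive links, while the far lower absorption of sound in seawater favors acoustics for range — exactly the trade-off the comparison in this paper explores.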

Sonar and Acoustic Waves Communication

Added on: October 3rd, 2013 by Afsal Meerankutty 1 Comment

In recent years, the demand for underwater communication has increased with growing human interest and activity underwater. Underwater communication is done with the help of sonar, electromagnetic, or acoustic waves, which differ in nature. This paper presents an overview of underwater communication using sonar and acoustic waves, shows that acoustic wave communication outperforms sonar-based communication, and explains the factors that affect acoustic wave communication.

Optical Fibers

Added on: March 25th, 2012 by Afsal Meerankutty 1 Comment

An optical fiber (or fibre) is a glass or plastic fiber that carries light along its length. Fiber optics is the overlap of applied science and engineering concerned with the design and application of optical fibers. Optical fibers are widely used in fiber-optic communications, which permits transmission over longer distances and at higher bandwidths (data rates) than other forms of communications. Fibers are used instead of metal wires because signals travel along them with less loss, and they are also immune to electromagnetic interference. Fibers are also used for illumination, and are wrapped in bundles so they can be used to carry images, thus allowing viewing in tight spaces. Specially designed fibers are used for a variety of other applications, including sensors and fiber lasers.

Light is kept in the core of the optical fiber by total internal reflection. This causes the fiber to act as a waveguide. Fibers which support many propagation paths or transverse modes are called multi-mode fibers (MMF), while those which can only support a single mode are called single-mode fibers (SMF). Multi-mode fibers generally have a larger core diameter, and are used for short-distance communication links and for applications where high power must be transmitted. Single-mode fibers are used for most communication links longer than 550 meters (1,800 ft).
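The total-internal-reflection condition above can be quantified: light stays guided when it strikes the core-cladding boundary beyond the critical angle asin(n_clad/n_core), and the fiber's light-gathering ability is its numerical aperture sqrt(n_core² − n_clad²). The indices below are typical step-index silica values, chosen for illustration.

```python
import math

n_core, n_clad = 1.48, 1.46      # typical step-index silica fiber indices

# Rays hitting the core-cladding boundary beyond this angle (from the
# normal) are totally internally reflected and stay guided.
critical_angle = math.degrees(math.asin(n_clad / n_core))

# Numerical aperture and the corresponding acceptance half-angle in air.
numerical_aperture = math.sqrt(n_core**2 - n_clad**2)
acceptance_angle = math.degrees(math.asin(numerical_aperture))

print(critical_angle, numerical_aperture, acceptance_angle)
```

Note how small the index contrast is: even a 0.02 difference is enough to trap light, but it yields only a ~14° acceptance cone, which is why launching light into single-mode fiber requires careful alignment.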

Optical Fiber Communication System

Added on: March 25th, 2012 by Afsal Meerankutty No Comments

Communication is an important part of our daily life. The communication process involves information generation, transmission, reception and interpretation. As needs for various types of communication such as voice, images, video and data increase, demands for large transmission capacity also increase. This need for large capacity has driven the rapid development of lightwave technology, and a worldwide industry has developed. An optical or lightwave communication system is a system that uses light waves as the carrier for transmission. An optical communication system mainly involves three parts: transmitter, receiver and channel. In optical communication, the transmitters are light sources, the receivers are light detectors, and the channels are optical fibers. The channel, i.e., the optical fiber, plays an especially important role because it carries the data from transmitter to receiver. Hence, we shall discuss mainly optical fibers.

Solar Power Satellites

Added on: March 22nd, 2012 by Afsal Meerankutty 5 Comments

The new millennium has brought increased pressure to find new renewable energy sources. The exponential increase in population has led to global crises such as global warming, environmental pollution and the rapid depletion of fossil fuel reserves. The demand for electric power also increases at a much higher pace than other energy demands as the world is industrialized and computerized. Under these circumstances, research has been carried out into the possibility of building a power station in space that transmits electricity to Earth by way of radio waves: the Solar Power Satellite. A Solar Power Satellite (SPS) converts solar energy into microwaves and sends them in a focused beam to a receiving antenna on the Earth for conversion to ordinary electricity. SPS is a clean, large-scale, stable electric power source. The Solar Power Satellite is known by a variety of other names, such as Satellite Power System, Space Power Station, Space Power System, Solar Power Station and Space Solar Power Station. One of the key technologies needed to enable the future feasibility of SPS is Microwave Wireless Power Transmission (WPT). WPT is based on the energy transfer capacity of a microwave beam, i.e., energy can be transmitted by a well-focused microwave beam. Advances in phased-array antennas and rectennas have provided the building blocks for a realizable WPT system.

Tri-Gate Transistors

Added on: March 18th, 2012 by Afsal Meerankutty No Comments

Tri-Gate transistors, the first to be truly three-dimensional, mark a major revolution in the semiconductor industry. The semiconductor industry continues to push technological innovation to keep pace with Moore’s Law, shrinking transistors so that ever more can be packed on a chip. However, at future technology nodes, the ability to shrink transistors becomes more and more problematic, in part due to worsening short channel effects and an increase in parasitic leakages with scaling of the gate-length dimension. In this regard, the Tri-Gate transistor architecture makes it possible to continue Moore’s law at 22 nm and below without a major transistor redesign. The physics, technology and advantages of the device are briefly discussed in this paper.

Electromagnetic Transducer

Added on: March 17th, 2012 by Afsal Meerankutty No Comments

This paper describes a novel electromagnetic transducer called the Four Quadrant Transducer (4QT) for hybrid electric vehicles. The system consists of one electrical machine unit (including two rotors) and two inverters, which enable the vehicle’s Internal Combustion Engine (ICE) to run at its optimum working points regarding efficiency, almost independently of the changing load requirements at the wheels. In other words, the ICE is operated at high torque and low speed as much as possible. As a consequence, reduced fuel consumption will be achieved.

The basic structure of the Four Quadrant Transducer system, simulation results and ideas about suitable topologies for designing a compact machine unit are reported. The simulated system of a passenger car is equipped with a single-step gearbox, making it simple and cost effective. Since the engine is not mechanically connected to the wheels and the electrical components have lower power ratings than the engine itself, the system takes advantage of the best characteristics of the series and parallel hybrids, respectively. The proposed concept looks promising, and fuel savings of more than 40% compared with a conventional vehicle can be achieved.

Threats of HEMP and HPM Devices

Added on: March 16th, 2012 by Afsal Meerankutty No Comments

Electromagnetic Pulse (EMP) is an instantaneous, intense energy field that can overload or disrupt at a distance numerous electrical systems and high technology microcircuits, which are especially sensitive to power surges. A large scale EMP effect can be produced by a single nuclear explosion detonated high in the atmosphere. This method is referred to as High-Altitude EMP (HEMP). A similar, smaller-scale EMP effect can be created using non-nuclear devices with powerful batteries or reactive chemicals. This method is called High Power Microwave (HPM). Several nations, including reported sponsors of terrorism, may currently have a capability to use EMP as a weapon for cyber warfare or cyber terrorism to disrupt communications and other parts of the U.S. critical infrastructure. Also, some equipment and weapons used by the U.S. military may be vulnerable to the effects of EMP.

The threat of an EMP attack against the United States is hard to assess, but some observers indicate that it is growing along with worldwide access to newer technologies and the proliferation of nuclear weapons. In the past, the threat of mutually assured destruction provided a lasting deterrent against the exchange of multiple high-yield nuclear warheads. However, now even a single, specially-designed low-yield nuclear explosion high above the United States, or over a battlefield, can produce a large-scale EMP effect that could result in a widespread loss of electronics, but no direct fatalities, and may not necessarily evoke a large nuclear retaliatory strike by the U.S. military. This, coupled with the possible vulnerability of U.S. commercial electronics and U.S. military battlefield equipment to the effects of EMP, may create a new incentive for other countries to develop or acquire a nuclear capability.

Policy issues raised by this threat include (1) what is the United States doing to protect civilian critical infrastructure systems against the threat of EMP, (2) how does the vulnerability of U.S. civilian and military electronics to EMP attack encourage other nations to develop or acquire nuclear weapons, and (3) how likely are terrorist organizations to launch a smaller-scale EMP attack against the United States?

Electronic Fuel Injection System

Added on: March 15th, 2012 by Afsal Meerankutty 1 Comment

In developed and developing countries, considerable emphasis is being laid on minimizing pollutants from internal combustion engines. A two-stroke cycle engine produces a considerable amount of pollutants when gasoline is used as fuel, due to short-circuiting. These pollutants, which include unburnt hydrocarbons and carbon monoxide, are harmful to human beings. There is a strong need to develop new technology which could minimize pollution from these engines.

Direct fuel injection has been demonstrated to significantly reduce unburned hydrocarbon emissions by timing the injection of fuel in such way as to prevent the escape of unburned fuel from the exhaust port during the scavenging process.

The increased use of petroleum fuels by automobiles has not only caused fuel scarcities, price hikes, higher import bills and economic imbalance, but also causes health hazards due to toxic emissions. Conventional fuels used in automobiles emit toxic pollutants which cause asthma, chronic cough, skin degradation, breathlessness, eye and throat problems, and even cancer.

In recent years, environmental improvement (CO2, NOx and ozone reduction) and energy issues have become more and more important worldwide. Natural gas is a good alternative fuel to address these problems because of its abundant availability and clean burning characteristics.


Gi-Fi

Added on: March 15th, 2012 by Afsal Meerankutty 6 Comments

Gi-Fi, or Gigabit Wireless, is the world’s first transceiver integrated on a single chip that operates at 60 GHz on the CMOS process. It will allow wireless transfer of audio and video data at up to 5 gigabits per second, ten times the current maximum wireless transfer rate, at one-tenth of the cost, usually within a range of 10 meters. It utilizes a 5 mm square chip and a 1 mm wide antenna burning less than 2 milliwatts of power to transmit data wirelessly over short distances, much like Bluetooth.

Gi-Fi will help push wireless communications toward higher speeds. For many years cables ruled the world; optical fibers played a dominant role for their higher bit rates and faster transmission. But the installation of cables caused great difficulty, which led to wireless access. The foremost wireless technology was Bluetooth, which can cover 9–10 meters; Wi-Fi followed, with a coverage area of about 91 meters. No doubt, the introduction of Wi-Fi wireless networks has proved a revolutionary solution to the “last mile” problem. However, the standard’s original limitations on data exchange rate and range, the number of revisions, and the high cost of infrastructure have not yet made it possible for Wi-Fi to become a real threat to cellular networks on the one hand, or hard-wired networks on the other. The continuous quest for even better technology, despite the substantial advantages of present technologies, led to the introduction of a new, more up-to-date standard for data exchange rate: Gi-Fi.

The development will enable the truly wireless office and home of the future. As the integrated transceiver is extremely small, it can be embedded into devices. The breakthrough will mean the networking of office and home equipment without wires will finally become a reality.
In this seminar we present a low-cost, low-power, high-bandwidth chip which will be vital in enabling the digital economy of the future.


Remote Sketching System

Added on: March 15th, 2012 by Afsal Meerankutty No Comments

There are several ways of building a remote sketching system. One way is to use a tablet and a stylus to input the sketch, and a computer monitor to display the sketch at the remote site. Such systems have a number of disadvantages. Writing with a stylus on a glass tablet is unnatural compared to sketching with a regular pen and paper. Shading and other effects are harder to achieve. Changing color means using the computer to select a new color. Incorporating existing hard-copy documents such as a graded exam is impossible.

Another way of building a remote sketching system is to use a video camera to image the sketch at one end, transmit the captured video to the other end, and display it there using an LCD projector. See Figure 1 for a schematic diagram of how such a system might operate. The first such camera-projector based remote sketching system was Pierre Wellner’s Xerox “Double DigitalDesk”.

ARM Processor

Added on: March 14th, 2012 by Afsal Meerankutty 1 Comment

An ARM processor is any of several 32-bit RISC (reduced instruction set computer) microprocessors developed by Advanced RISC Machines, Ltd. The ARM architecture was originally conceived by Acorn Computers Ltd. in the 1980s. Since then, it has evolved into a family of microprocessors extensively used in consumer electronic devices such as mobile phones, multimedia players, pocket calculators and PDAs (personal digital assistants).
ARM processor features include:

  • Load/store architecture
  • An orthogonal instruction set
  • Mostly single-cycle execution
  • A 16 × 32-bit register file
  • Enhanced power-saving design

ARM provides developers with intellectual property (IP) solutions in the form of processors, physical IP, cache and SoC designs, application-specific standard products (ASSPs), related software and development tools — everything you need to create an innovative product design based on industry-standard components that are ‘next generation’ compatible.

IRIS Recognition

Added on: March 12th, 2012 by Afsal Meerankutty 1 Comment

Iris recognition is an automated method of capturing a person’s unique biological data that distinguishes him or her from another individual. It has emerged as one of the most powerful and accurate identification techniques in the modern world, and has proven to be the most foolproof technique for identifying individuals without the use of cards, PINs and passwords. It facilitates automatic identification whereby electronic transactions or access to places, information or accounts are made easier, quicker and more secure.

A method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence. The most unique phenotypic feature visible in a person’s face is the detailed texture of each eye’s iris: an estimate of its statistical complexity in a sample of the human population reveals variation corresponding to several hundred independent degrees of freedom. Morphogenetic randomness in the texture expressed phenotypically in the iris trabecular meshwork ensures that a test of statistical independence on two coded patterns originating from different eyes is passed almost certainly, whereas the same test is failed almost certainly when the compared codes originate from the same eye. The visible texture of a person’s iris in a real-time video image is encoded into a compact sequence of multi-scale quadrature 2-D Gabor wavelet coefficients, whose most significant bits comprise a 512-byte “IrisCode.” Statistical decision theory generates identification decisions from Exclusive-OR comparisons of complete iris codes at the rate of 4,000 per second, including calculation of decision confidence levels. The distributions observed empirically in such comparisons imply a theoretical “cross-over” error rate of one in 131,000 when a decision criterion is adopted that would equalize the False Accept and False Reject error rates.
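The Exclusive-OR comparison described above reduces to a normalized Hamming distance between two codes. In this sketch, Python integers stand in for the 512-byte (4,096-bit) codes, and the 0.32 decision threshold is a commonly quoted illustrative criterion, not necessarily the exact operating point.

```python
import random

CODE_BITS = 4096                     # 512 bytes = 4096 bits

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two iris codes (XOR + popcount)."""
    return bin(code_a ^ code_b).count("1") / CODE_BITS

random.seed(1)
enrolled = random.getrandbits(CODE_BITS)

# Same eye, re-imaged: only a small fraction of bits disagree (noise).
noise = 0
for _ in range(100):                 # flip at most 100 of the 4096 bits
    noise |= 1 << random.randrange(CODE_BITS)
probe_same = enrolled ^ noise

# Different eye: a statistically independent code, ~50% of bits disagree.
probe_other = random.getrandbits(CODE_BITS)

print(hamming_distance(enrolled, probe_same) < 0.32)    # accept
print(hamming_distance(enrolled, probe_other) > 0.32)   # reject
```

Because each comparison is just an XOR and a bit count, thousands of comparisons per second are feasible even on modest hardware, consistent with the 4,000-per-second rate quoted above.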

Artificial Eye

Added on: March 3rd, 2012 by Afsal Meerankutty No Comments

The retina is a thin layer of neural tissue that lines the back wall inside the eye. Some of these cells act to receive light, while others interpret the information and send messages to the brain through the optic nerve. This is part of the process that enables us to see. In a damaged or dysfunctional retina, the photoreceptors stop working, causing blindness. By some estimates, there are more than 10 million people worldwide affected by retinal diseases that lead to loss of vision.

The absence of effective therapeutic remedies for retinitis pigmentosa (RP) and age-related macular degeneration (AMD) has motivated the development of experimental strategies to restore some degree of visual function to affected patients. Because the remaining retinal layers are anatomically spared, several approaches have been designed to artificially activate this residual retina and thereby the visual system.

At present, two general strategies have been pursued. The “epiretinal” approach involves a semiconductor-based device placed above the retina, close to or in contact with the nerve fiber layer and retinal ganglion cells. The information in this approach must be captured by a camera system before transmitting data and energy to the implant. The “subretinal” approach involves the electrical stimulation of the inner retina from the subretinal space by implantation of a semiconductor-based micro photodiode array (MPA) into this location. The concept of the subretinal approach is that electrical charge generated by the MPA in response to a light stimulus may be used to artificially alter the membrane potential of neurons in the remaining retinal layers in a manner that produces formed images.

Some researchers have developed an implant system where a video camera captures images, a chip processes the images, and an electrode array transmits the images to the brain. It’s called Cortical Implants.

Transparent Electronics

Added on: March 1st, 2012 by Afsal Meerankutty No Comments

Transparent electronics is an emerging science and technology field focused on producing ‘invisible’ electronic circuitry and opto-electronic devices. Applications include consumer electronics, new energy sources, and transportation; for example, automobile windshields could transmit visual information to the driver. Glass in almost any setting could also double as an electronic device, possibly improving security systems or offering transparent displays. In a similar vein, windows could be used to produce electrical power. Other civilian and military applications in this research field include real-time wearable displays.

As in conventional Si/III–V-based electronics, the basic device structure is based on semiconductor junctions and transistors. However, the device building-block materials — the semiconductor, the electric contacts, and the dielectric/passivation layers — must now be transparent in the visible range: a true challenge. Therefore, the first scientific goal of this technology must be to discover, understand, and implement transparent high-performance electronic materials. The second goal is their implementation and evaluation in transistor and circuit structures. The third goal relates to achieving application-specific properties, since transistor performance and materials property requirements vary depending on the final product device specifications. Consequently, enabling this revolutionary technology requires bringing together expertise from various pure and applied sciences, including materials science, chemistry, physics, electrical/electronic/circuit engineering, and display science.

Plastic Memory

Added on: March 1st, 2012 by Afsal Meerankutty 1 Comment

A conducting plastic has been used to create a new memory technology with the potential to store a megabit of data in a millimeter-square device, 10 times denser than current magnetic memories. The device is cheap and fast, but cannot be rewritten, so it would only be suitable for permanent storage.

The device sandwiches a blob of a conducting polymer called PEDOT and a silicon diode between perpendicular wires.

The key to the new technology was discovered by passing a high current through PEDOT (polyethylenedioxythiophene), which turns it into an insulator, rather like blowing a fuse. The polymer thus has two possible states, conductor and insulator, which form the ones and zeros necessary to store digital data.

However, turning the polymer into an insulator involves a permanent chemical change, meaning the memory can only be written once.
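The write-once behaviour described above can be modelled as a toy cell that starts conducting and can be irreversibly "blown" into an insulator. The conductor-equals-one encoding and all names here are illustrative assumptions; the article does not specify the encoding.

```python
# Toy model of a write-once PEDOT memory cell: a pristine cell conducts
# (read as 1); a high write current permanently converts the polymer to
# an insulator (read as 0). There is no erase operation.

class PedotCell:
    def __init__(self):
        self.conducting = True  # pristine polymer conducts

    def write_zero(self):
        # Permanent chemical change -- like blowing a fuse.
        self.conducting = False

    def read(self) -> int:
        return 1 if self.conducting else 0

def burn_pattern(bits):
    """Program a fresh row of cells; writing a 0 is irreversible."""
    cells = [PedotCell() for _ in bits]
    for cell, bit in zip(cells, bits):
        if bit == 0:
            cell.write_zero()
    return [c.read() for c in cells]

print(burn_pattern([1, 0, 1, 1, 0]))  # [1, 0, 1, 1, 0]
```

Because only the conductor-to-insulator transition exists, the device behaves like a fuse-based PROM rather than a rewritable memory.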

3D Television

Added on: February 28th, 2012 by Afsal Meerankutty No Comments

Three-dimensional TV is expected to be the next revolution in TV history. Researchers have implemented a 3D TV prototype system with real-time acquisition, transmission, and 3D display of dynamic scenes, and developed a distributed, scalable architecture to manage the high computation and bandwidth demands. The 3D display shows high-resolution stereoscopic color images for multiple viewpoints without special glasses. This is the first real-time end-to-end 3D TV system with enough views and resolution to provide a truly immersive 3D experience. Japan plans to make this futuristic television a commercial reality by 2020 as part of a broad national project that will bring together researchers from the government, technology companies, and academia. The targeted “virtual reality” television would allow people to view high-definition images in 3D from any angle, in addition to being able to touch and smell the objects being projected upwards from a screen to the floor.

Chameleon Chips

Added on: February 26th, 2012 by Afsal Meerankutty No Comments

Today’s microprocessors sport a general-purpose design, which has its own advantages and disadvantages.

  • Advantage: One chip can run a range of programs, so you don’t need separate computers for different jobs, such as crunching spreadsheets or editing digital photos.
  • Disadvantage: For any one application, much of the chip’s circuitry isn’t needed, and the presence of those “wasted” circuits slows things down.

Suppose, instead, that the chip’s circuits could be tailored specifically for the problem at hand–say, computer-aided design–and then rewired, on the fly, when you loaded a tax-preparation program. One set of chips, little bigger than a credit card, could do almost anything, even changing into a wireless phone. The market for such versatile marvels would be huge, and would translate into lower costs for users.

So computer scientists are hatching a novel concept that could increase number-crunching power–and trim costs as well. Call it the chameleon chip.

Chameleon chips would be an extension of what can already be done with field-programmable gate arrays (FPGAs).

An FPGA is covered with a grid of wires. At each crossover, there’s a switch that can be semipermanently opened or closed by sending it a special signal. Usually the chip must first be inserted in a little box that sends the programming signals. But now, labs in Europe, Japan, and the U.S. are developing techniques to rewire FPGA-like chips anytime–and even software that can map out circuitry that’s optimized for specific problems.
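The grid-of-switches idea above can be sketched as a tiny state model: each crossover holds a switch that a programming signal opens or closes. The class and method names are illustrative, not any vendor’s API, and real FPGA routing is far richer than a boolean matrix.

```python
# Minimal model of an FPGA-style wiring grid: a matrix of crossover
# switches, each semipermanently opened or closed by a programming
# signal. A reconfigurable ("chameleon") chip would allow this
# programming step to happen at run time.

class SwitchGrid:
    def __init__(self, rows: int, cols: int):
        # All crossover switches start open (no connection).
        self.closed = [[False] * cols for _ in range(rows)]

    def program(self, row: int, col: int, close: bool):
        # Stands in for the configuration bitstream that sets a switch.
        self.closed[row][col] = close

    def connected(self, row: int, col: int) -> bool:
        return self.closed[row][col]

grid = SwitchGrid(4, 4)
grid.program(1, 2, True)     # close one crossover to route a signal
print(grid.connected(1, 2))  # True
print(grid.connected(0, 0))  # False
```

Mapping a circuit onto such a grid — deciding which switches to close — is exactly the job of the optimization software mentioned above.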

The chips still won’t change colors, but they may well color the way we use computers in years to come. A chameleon chip is a fusion of custom integrated circuits and programmable logic. For highly performance-oriented tasks, custom chips that do one or two things spectacularly, rather than many things averagely, are used. With field-programmable chips, we now have chips that can be rewired in an instant, bringing the benefits of customization to the mass market.

A reconfigurable processor is a microprocessor with erasable hardware that can rewire itself dynamically. This allows the chip to adapt effectively to the programming tasks demanded by the particular software it is interfacing with at any given time. Ideally, the reconfigurable processor can transform itself from a video chip to a central processing unit (CPU) to a graphics chip, for example, all optimized to allow applications to run at the highest possible speed. The new chips can be called a “chip on demand.” In practical terms, this ability can translate to immense flexibility in terms of device functions. For example, a single device could serve as both a camera and a tape recorder (among numerous other possibilities): you would simply download the desired software and the processor would reconfigure itself to optimize performance for that function.

Reconfigurable processors compete in the market with traditional hard-wired chips and several types of programmable microprocessors. Programmable chips have been in existence for over ten years. Digital signal processors (DSPs), for example, are high-performance programmable chips used in cell phones, automobiles, and various types of music players.

Another option, programmable logic chips, are equipped with arrays of memory cells that can be programmed to perform hardware functions using software tools. These are more flexible than the specialized DSP chips but also slower and more expensive. Hard-wired chips are the oldest, cheapest, and fastest of all the options, but also the least flexible.


iDEN

Added on: February 25th, 2012 by Afsal Meerankutty No Comments

iDEN is a mobile telecommunications technology, developed by Motorola, which provides its users the benefits of a trunked radio and a cellular telephone. iDEN places more users in a given spectral space, compared to analog cellular and two-way radio systems, by using speech compression and time-division multiple access (TDMA). Notably, iDEN is designed, and licensed, to operate on individual frequencies that may not be contiguous. iDEN operates on 25 kHz channels, but only occupies 20 kHz in order to provide interference protection via guard bands. By comparison, TDMA cellular (IS-54 and IS-136) is licensed in blocks of 30 kHz channels, but each emission occupies 40 kHz, and is capable of serving the same number of subscribers per channel as iDEN. iDEN supports either three or six interconnect users (phone users) per channel, and either six or twelve dispatch users (push-to-talk users) per channel. Since there is no analog component of iDEN, mechanical duplexing in the handset is unnecessary, so time-domain duplexing is used instead, the same way that other digital-only technologies duplex their handsets. Also, like other digital-only technologies, hybrid or cavity duplexing is used at the base station (cell site).
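The channel figures quoted above lend themselves to a quick back-of-the-envelope calculation of users per kilohertz. This is purely illustrative arithmetic on the article's numbers; real capacity depends on vocoder configuration and guard-band accounting.

```python
# Spectral-efficiency sketch using the iDEN figures from the text:
# a 25 kHz channel carries 3 or 6 interconnect (phone) users, or
# 6 or 12 dispatch (push-to-talk) users.

def users_per_khz(users_per_channel: int, channel_khz: float) -> float:
    return users_per_channel / channel_khz

for label, users in [("3 interconnect", 3), ("6 interconnect", 6),
                     ("6 dispatch", 6), ("12 dispatch", 12)]:
    print(f"{label:>15}: {users_per_khz(users, 25):.2f} users/kHz")
```

The dispatch (push-to-talk) mode packs twice as many users per channel as interconnect mode because half-duplex speech needs less sustained bandwidth per user.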


Memristor

Added on: February 24th, 2012 by Afsal Meerankutty No Comments

Generally, when most people think about electronics, they may initially think of products such as cell phones, radios, and laptop computers. Others, with some engineering background, may think of resistors, capacitors, and the other basic components necessary for electronics to function. Such basic components are fairly limited in number, each having its own characteristic function.

Memristor theory was formulated and named by Leon Chua in a 1971 paper. Chua strongly believed that a fourth device existed to provide conceptual symmetry with the resistor, inductor, and capacitor. This symmetry follows from the description of basic passive circuit elements as defined by a relation between two of the four fundamental circuit variables. A device linking charge and flux (themselves defined as time integrals of current and voltage), which would be the memristor, was still hypothetical at the time. However, it would not be until thirty-seven years later, on April 30, 2008, that a team at HP Labs led by the scientist R. Stanley Williams would announce the discovery of a switching memristor. Based on a thin film of titanium dioxide, it has been presented as an approximately ideal device.
The reason that the memristor is radically different from the other fundamental circuit elements is that, unlike them, it carries a memory of its past. When you turn off the voltage to the circuit, the memristor still remembers how much was applied before and for how long. That’s an effect that can’t be duplicated by any circuit combination of resistors, capacitors, and inductors, which is why the memristor qualifies as a fundamental circuit element.
The arrangement of these few fundamental circuit components form the basis of almost all of the electronic devices we use in our everyday life. Thus the discovery of a brand new fundamental circuit element is something not to be taken lightly and has the potential to open the door to a brand new type of electronics. HP already has plans to implement memristors in a new type of non-volatile memory which could eventually replace flash and other memory systems.
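The "memory of its past" behaviour described above can be illustrated with the linear-drift model often used to describe the HP TiO2 device: the resistance depends on how far a doped region has drifted through the film, and that position persists when the voltage is removed. The parameter values below are illustrative round numbers, not the measured device values.

```python
# Minimal simulation of a linear-drift memristor model: memristance is
# a weighted mix of a low (doped) and high (undoped) resistance, and
# current drives the doped-region boundary w through the film.

R_ON, R_OFF = 100.0, 16_000.0   # ohms: doped / undoped limiting resistances
D = 10e-9                        # m: film thickness (illustrative)
MU_V = 1e-14                     # m^2 s^-1 V^-1: dopant mobility (illustrative)
DT = 1e-6                        # s: time step

w = 0.1 * D  # initial doped-region width

def step(voltage: float) -> float:
    """Advance the state one time step; return the memristance seen."""
    global w
    m = R_ON * (w / D) + R_OFF * (1 - w / D)   # current memristance
    i = voltage / m
    w += MU_V * (R_ON / D) * i * DT            # linear dopant drift
    w = min(max(w, 0.0), D)                    # clamp to the film
    return m

before = step(1.0)
for _ in range(5000):
    step(1.0)         # sustained positive bias drifts w, lowering resistance...
after = step(0.0)     # ...and with the bias removed, the state is remembered
print(before > after)  # True
```

The key point: with zero applied voltage no current flows, so `w` stops changing — the device simply holds whatever resistance its charge history left it with, which no combination of resistors, capacitors, and inductors can replicate.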

3 D ICs

Added on: February 16th, 2012 by Afsal Meerankutty 1 Comment

The unprecedented growth of the computer and information technology industry is demanding Very Large Scale Integrated (VLSI) circuits with increasing functionality and performance at minimum cost and power dissipation. VLSI circuits are being aggressively scaled to meet this demand, which in turn poses serious problems for the semiconductor industry.

Additionally heterogeneous integration of different technologies in one single chip (SoC) is becoming increasingly desirable, for which planar (2-D) ICs may not be suitable.

3-D ICs are an attractive chip architecture that can alleviate interconnect-related problems such as delay and power dissipation, and can also facilitate integration of heterogeneous technologies in one chip (SoC). The multi-layer chip industry opens up a whole new world of design. With the introduction of 3-D ICs, the world of chips may never look the same again.


Animatronics

Added on: February 9th, 2012 by Afsal Meerankutty No Comments

Animatronics is a cross between animation and electronics. Basically, an animatronic is a mechanized puppet, which may be preprogrammed or remotely controlled. The term, originally coined by Walt Disney as “Audio-Animatronics” to describe his mechanized characters, has roots reaching as far back as Leonardo da Vinci’s automaton lion, theoretically built to present lilies to the King of France during one of his visits. It has since developed into a career that may require combined talent in mechanical engineering, sculpting and casting, control technologies, electrical and electronic design, airbrushing, and radio control.

Long before digital effects appeared, animatronics were making cinematic history. The scare generated by the Great White coming out of the water in “Jaws” and the tender otherworldliness of “E.T.” were both its work. The Jurassic Park series combined digital effects with animatronics.

It is possible for us to build our own animatronics by making use of ready-made animatronic kits provided by companies such as Mister Computers.

Airborne Internet

Added on: February 6th, 2012 by Afsal Meerankutty 2 Comments

The Airborne Internet is a network in which all nodes would be located in aircraft. The network is intended for use in aviation communications, navigation, and surveillance (CNS) and would also be useful to businesses, private Internet users, and the military. In time of war, for example, an airborne network might enable military planes to operate without the need for a communications infrastructure on the ground. Such a network could also allow civilian planes to continually monitor each other’s positions and flight paths.

The airborne network will serve tens of thousands of subscribers within a super-metropolitan area by offering ubiquitous access throughout the network’s signal “footprint”. The aircraft will carry the “hub” of a wireless network with a star topology. The aircraft will fly in shifts to provide continuous service, 24 hours per day, 7 days per week, with an overall system reliability of 99.9% or greater. At least three different methods have been proposed for putting communication nodes aloft: the first would employ manned aircraft, the second unmanned aircraft, and the third blimps. The nodes would provide air-to-air, surface-to-air, and surface-to-surface communications. The aircraft or blimps would fly at altitudes of around 16 km and would cover regions of about 40 mi (64 km) in radius. Any subscriber within this region will be able to access the network’s ubiquitous multi-gigabit-per-second “bit cloud” on demand. What the Airborne Internet will do is provide an infrastructure that can reach areas that don’t have broadband cables and wires. Data transfer rates would be on the order of several gigabits per second, comparable to those of high-speed cable modem connections. Network users could communicate directly with other users, and indirectly with conventional Internet users through surface-based nodes.
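As a rough sanity check on the figures quoted above, the geometric line-of-sight horizon from 16 km altitude can be estimated with the standard formula sqrt(2·R·h), ignoring atmospheric refraction. The 64 km service radius is taken from the text; everything else here is illustrative arithmetic.

```python
# Line-of-sight radio horizon for a platform at altitude h metres:
# d = sqrt(2 * R_earth * h), refraction ignored.

import math

EARTH_RADIUS_M = 6_371_000.0

def radio_horizon_km(altitude_m: float) -> float:
    return math.sqrt(2 * EARTH_RADIUS_M * altitude_m) / 1000.0

horizon = radio_horizon_km(16_000)
print(f"{horizon:.0f} km")  # roughly 450 km to the geometric horizon
```

The geometric horizon (about 450 km) is far larger than the stated ~64 km service radius, so coverage is limited by link budget and antenna footprint, not by the curvature of the Earth.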

Like the Internet, the airborne network would use TCP/IP as the set of protocols for specifying network addresses and ensuring message packets arrive. This technology is also called High Altitude Long Operation (HALO). The concept of the Airborne Internet was first proposed at NASA Langley Research Center’s Small Aircraft Transportation System (SATS) Planning Conference in 1999.
