
SEMINAR TOPICS CATEGORY

Electronics Topics Category

TigerSharc Processor

Added on: February 25th, 2020 by Afsal Meerankutty No Comments

In the past three years, several multiple-data-path, pipelined digital signal processors have been introduced into the marketplace. This new generation of DSPs takes advantage of higher levels of integration than were available to their predecessors. The TigerSHARC processor is the newest and most powerful member of this family, incorporating mechanisms such as SIMD, VLIW and short-vector memory access in a single processor. This is the first time that all of these techniques have been combined in a real-time processor.

The TigerSHARC DSP is an ultra-high-performance static superscalar architecture optimized for telecommunications infrastructure and other computationally demanding applications. This unique architecture combines elements of RISC, VLIW, and conventional DSP processors to provide native support for 8-, 16-, and 32-bit fixed-point as well as floating-point data types on a single chip.

Large on-chip memory, extremely high internal and external bandwidth, and dual compute blocks provide the capabilities needed to handle a vast array of computationally demanding, large-scale signal processing tasks.

Solar Sail

Added on: February 24th, 2020 by Webmaster No Comments

Hundreds of space missions have been launched since the last lunar mission, including several deep space probes that have been sent to the edges of our solar system. However, our journeys to space have been limited by the power of chemical rocket engines and the amount of rocket fuel that a spacecraft can carry. Today, the weight of a space shuttle at launch is approximately 95 percent fuel. What could we accomplish if we could reduce our need for so much fuel and the tanks that hold it?
International space agencies and some private corporations have proposed many methods of transportation that would allow us to go farther, but a manned space mission has yet to go beyond the moon. The most realistic of these space transportation options calls for the elimination of both rocket fuel and rocket engines — replacing them with sails. Yes, that’s right, sails.
Solar-sail mission analysis and design is currently performed assuming constant optical and mechanical properties of the thin metalized polymer films that are projected for solar sails. More realistically, however, these properties are likely to be affected by the damaging effects of the space environment. The standard solar-sail force models can therefore not be used to investigate the consequences of these effects on mission performance. The aim of this paper is to propose a new parametric model for describing the sail film’s optical degradation with time. In particular, the sail film’s optical coefficients are assumed to depend on its environmental history, that is, the radiation dose. Using the proposed model, the optimal control laws for degrading solar sails are derived using an indirect method and the effects of different degradation behaviors are investigated for an example interplanetary mission.
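
As an illustration of the kind of parametric law the abstract describes, the sketch below assumes (purely as an example) that each optical coefficient decays exponentially from its beginning-of-life value toward a degraded end-of-life value as the accumulated radiation dose grows; the decay constant and the coefficient values are hypothetical, not taken from the paper.

```python
import math

def degraded_coefficient(p0, p_end, dose, lam=1.0):
    """Hypothetical exponential degradation law: the optical coefficient
    moves from its beginning-of-life value p0 toward an end-of-life
    value p_end as the accumulated radiation dose grows."""
    return p_end + (p0 - p_end) * math.exp(-lam * dose)

def accumulated_dose(flux_history, dt):
    """Radiation dose as the time integral of the (assumed) solar flux
    seen by the sail, approximated here by a simple sum."""
    return sum(f * dt for f in flux_history)

# Example: reflectivity drifts from 0.88 toward 0.70 as the dose accumulates.
flux = [1.0] * 100                       # normalized flux samples (assumption)
dose = accumulated_dose(flux, dt=0.01)
rho = degraded_coefficient(0.88, 0.70, dose, lam=0.5)
print(f"dose={dose:.2f}, degraded reflectivity={rho:.3f}")
```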

Tele Immersion

Added on: February 24th, 2020 by Afsal Meerankutty No Comments

Tele-immersion may be the next major development in information technology. Using tele-immersion, you can visit someone on the other side of the world without setting foot outside.
Tele-immersion is a new medium that enables a user to share a virtual space with remote participants. The user is immersed in a 3D world that is transmitted from a remote site. This medium for human interaction, enabled by digital technology, approximates the illusion that a person is in the same physical space as others, even though they may be thousands of miles distant. It combines the display and interaction techniques of virtual reality with new computer-vision technologies. Thus, with the aid of this new technology, users at geographically distributed sites can collaborate in real time in a shared, simulated, hybrid environment, immersed in one another's presence and feeling as if they share the same physical space.

Plagiarism Detection Of Images

Added on: February 23rd, 2020 by Afsal Meerankutty No Comments

“Plagiarism is defined as presenting someone else’s work as your own. Work means any intellectual output, and typically includes text, data, images, sound or performance.” Plagiarism is the unacknowledged and inappropriate use of the ideas or wording of another writer. Because plagiarism corrupts values to which the university community is fundamentally committed, the pursuit of knowledge and intellectual honesty, it is considered a grave violation of academic integrity and the sanctions against it are correspondingly severe. Plagiarism can be characterized as “academic theft.”
CBIR, or Content Based Image Retrieval, is the retrieval of images based on visual features such as colour, texture and shape. The reason for the development of CBIR systems is that in many large image databases, traditional methods of image indexing have proven to be insufficient, laborious and extremely time consuming. These old methods of image indexing, ranging from storing an image in the database and associating it with a keyword or number, to associating it with a categorized description, have become obsolete. In CBIR, each image stored in the database has its features extracted and compared to the features of the query.
Feature (content) extraction is the basis of content-based image retrieval. In a broad sense, features may include both text-based features (keywords, annotations, etc.) and visual features (colour, texture, shape, faces, etc.). Within the visual feature scope, features can be further classified as general features and domain-specific features. The former include colour, texture and shape features, while the latter are application-dependent and may include, for example, human faces and fingerprints.
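
As a minimal illustration of content-based matching (not the specific system described above), the sketch below extracts a normalized colour histogram from each image and compares the query against stored images by histogram intersection; the bin count and the NumPy/Pillow calls are ordinary library usage chosen for the example.

```python
import numpy as np
from PIL import Image

def colour_histogram(path, bins=8):
    """Normalized RGB colour histogram: a simple global visual feature."""
    img = np.asarray(Image.open(path).convert("RGB"))
    hist, _ = np.histogramdd(img.reshape(-1, 3),
                             bins=(bins, bins, bins), range=((0, 256),) * 3)
    hist = hist.flatten()
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical colour distributions."""
    return float(np.minimum(h1, h2).sum())

# Usage sketch: rank stored images by how closely their colour content
# matches the query, then inspect the top matches more carefully.
# database = {"img1.jpg": colour_histogram("img1.jpg"), ...}
# query = colour_histogram("query.jpg")
# matches = {name: histogram_intersection(query, h)
#            for name, h in database.items()}
```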

Telepresence

Added on: February 23rd, 2020 by Afsal Meerankutty No Comments

Telepresence refers to a set of technologies which allow a person to feel as if they were present, to give the appearance that they were present, or to have an effect, at a location other than their true location. Telepresence requires that the senses of the user, or users, are provided with such stimuli as to give the feeling of being in that other location. Additionally, the user(s) may be given the ability to affect the remote location. In this case, the user’s position, movements, actions, voice, etc. may be sensed, transmitted and duplicated in the remote location to bring about this effect, so information may be travelling in both directions between the user and the remote location.

Telepresence is a new technology that creates unique, “in-person” experiences between people, places, and events in their work and personal lives. It combines innovative video, audio, and interactive elements (both hardware and software) to create this experience over the network. Telepresence means “feeling like you are somewhere else”. Some people have a very technical interpretation of this, insisting that you must have head-mounted displays in order to have telepresence. Others have a task-specific meaning, where “presence” requires feeling that you are emotionally and socially connected with the remote world. It is all still a little vague at this time.

Positron Emission Tomography

Added on: February 19th, 2020 by Afsal Meerankutty No Comments

Positron emission tomography, also called PET imaging or a PET scan, is a type of nuclear medicine imaging. It produces a three-dimensional image or map of functional processes in the body. The system detects pairs of gamma rays emitted indirectly by a positron-emitting radioisotope, which is introduced into the body on a metabolically active molecule. Three-dimensional images of metabolic activity are then reconstructed by computer analysis, often aided in modern scanners by a CT X-ray scan performed on the patient at the same time, in the same machine, which adds anatomical information and supports more accurate diagnoses.
A PET scan measures important body functions, such as blood flow, oxygen use, and sugar (glucose) metabolism, to help doctors evaluate how well organs and tissues are functioning.
PET is actually a combination of nuclear medicine and biochemical analysis. Used mostly in patients with brain or heart conditions and cancer, PET helps to visualize the biochemical changes taking place in the body, such as the metabolism (the process by which cells change food into energy after food is digested and absorbed into the blood) of the heart muscle.
PET differs from other nuclear medicine examinations in that PET detects metabolism within body tissues, whereas other types of nuclear medicine examinations detect the amount of a radioactive substance collected in body tissue in a certain location to examine the tissue’s function.

Speech Processing

Added on: February 18th, 2020 by Afsal Meerankutty No Comments

Speech processing is the study of speech signals and of the methods used to process them. The signals are usually handled in a digital representation, so speech processing can be seen as the intersection of digital signal processing and natural language processing. Its main areas include:

  • Speech recognition, which deals with analysis of the linguistic content of a speech signal.
  • Speaker recognition, where the aim is to recognize the identity of the speaker.
  • Enhancement of speech signals, e.g. audio noise reduction.
  • Speech coding, a specialized form of data compression, is important in the telecommunication area.
  • Voice analysis for medical purposes, such as analysis of vocal loading and dysfunction of the vocal cords.
  • Speech synthesis: the artificial synthesis of speech, which usually means computer generated speech.
  • Speech enhancement: enhancing the perceptual quality of speech signal by removing the destructive effects of noise, limited capacity recording equipment, impairments, etc.

On-road Charging of Electric Vehicles

Added on: February 15th, 2020 by Afsal Meerankutty 1 Comment

This seminar topic report delves into the concept of Contactless Power Transfer (CPT) systems and their potential application for charging electric vehicles (EVs) without requiring any physical interconnection. The focus of the investigation centers on the feasibility of implementing on-road charging systems to extend the driving range of EVs and reduce the size of their batteries. The paper examines critical aspects such as the necessary road coverage and power transfer capability of the CPT system.

One of the primary objectives of this study is to explore how on-road charging can positively impact EVs by offering continuous charging while they are in motion. By seamlessly charging the EVs while driving, it becomes possible to extend their range and mitigate range anxiety, a crucial concern for many potential EV owners. Moreover, with reduced battery size requirements, the overall weight and cost of EVs could potentially be lowered, making them more accessible and efficient.

To achieve these benefits, the paper addresses essential design considerations concerning the distribution and length of CPT segments across the road. Determining the optimal arrangement of these charging segments is critical to ensure efficient and reliable charging for vehicles on the move. Additionally, understanding the power transfer capability of the CPT system is essential to match the charging requirements of various EV models and ensure compatibility.

A significant aspect of this study is to assess the total power demand generated by all passing vehicles using the on-road charging system. Understanding the overall power consumption is essential to gauge the system’s scalability and the potential burden it may place on the electrical grid. This assessment can also shed light on the feasibility of powering EVs directly from renewable energy sources, which aligns with the broader sustainability goals of the transportation sector.
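
To make the scale of that demand concrete, the sketch below estimates the aggregate power drawn from one instrumented road section under stated assumptions; the traffic density, per-vehicle transfer power and the fraction of the road covered by CPT segments are hypothetical example numbers, not figures from the study.

```python
def road_section_power_demand(vehicles_per_km, road_length_km,
                              coverage_fraction, transfer_power_kw):
    """Aggregate power (kW) drawn by all vehicles that happen to be over
    energized CPT segments at one instant, assuming uniform traffic:
    only the covered fraction of vehicles draws power at any moment."""
    vehicles_on_section = vehicles_per_km * road_length_km
    vehicles_charging = vehicles_on_section * coverage_fraction
    return vehicles_charging * transfer_power_kw

def net_battery_power(transfer_power_kw, coverage_fraction, drive_power_kw):
    """Average power into the battery while driving: time-averaged CPT
    input minus the power needed to propel the vehicle."""
    return transfer_power_kw * coverage_fraction - drive_power_kw

# Hypothetical example: 40 vehicles/km over 10 km, 40% coverage, 50 kW pads.
print(road_section_power_demand(40, 10, 0.40, 50))   # -> 8000 kW from the grid
print(net_battery_power(50, 0.40, 15))               # -> +5 kW average into the battery
```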

By exploring the possibility of integrating EVs with renewable energy sources, the paper seeks to contribute to the ongoing efforts towards a cleaner and greener future for transportation. If successful, on-road charging systems could significantly reduce the carbon footprint of EVs and promote their widespread adoption, thus making substantial progress towards a more sustainable mobility landscape.

In conclusion, this seminar topic report presents a comprehensive investigation into the on-road charging of electric vehicles using Contactless Power Transfer systems. By addressing key aspects such as road coverage, power transfer capability, design considerations, and potential reliance on renewable energy, the study aims to shed light on the promising opportunities and challenges in this domain. The outcomes of this research have the potential to reshape the EV charging infrastructure and accelerate the transition to an emission-free transportation era.

Navbelt and Guidecane

Added on: January 27th, 2020 by Afsal Meerankutty No Comments

Recent revolutionary achievements in robotics and bioengineering have given scientists and engineers great opportunities and challenges to serve humanity. This seminar is about the NavBelt and GuideCane, two computerised obstacle-avoidance devices based on advanced mobile robotic navigation, useful for visually impaired people. This is “bioengineering for people with disabilities”. The NavBelt is worn by the user like a belt and is equipped with an array of ultrasonic sensors. It provides acoustic signals via a set of stereo earphones that guide the user around obstacles, or it displays a virtual acoustic panoramic image of the traveller’s surroundings. One limitation of the NavBelt is that it is exceedingly difficult for the user to comprehend the guidance signals in time to allow fast walking. A newer device, called the GuideCane, effectively overcomes this problem. The GuideCane uses the same mobile robotics technology as the NavBelt, but it is a wheeled device pushed ahead of the user via an attached cane. When the GuideCane detects an obstacle, it steers around it. The user immediately feels this steering action and can follow the GuideCane’s new path easily without any conscious effort. The mechanical, electrical and software components, the user-machine interface and the prototypes of the two devices are described below.

Artificial Passenger

Added on: March 6th, 2017 by Afsal Meerankutty No Comments

The AP is an artificial intelligence–based companion that will be resident in software and chips embedded in the automobile dashboard. The heart of the system is a conversation planner that holds a profile of you, including details of your interests and profession.

A microphone picks up your answer and breaks it down into separate words with speech-recognition software. A camera built into the dashboard also tracks your lip movements to improve the accuracy of the speech recognition. A voice analyzer then looks for signs of tiredness by checking to see if the answer matches your profile. Slow responses and a lack of intonation are signs of fatigue.
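
A toy illustration of the kind of check such a voice analyzer might perform, assuming (hypothetically) that fatigue is flagged from response delay and a flat intonation contour; the thresholds and the pitch-variance proxy for intonation are invented for the example, not taken from the system described above.

```python
def fatigue_score(response_delay_s, pitch_values_hz,
                  delay_threshold_s=2.0, pitch_std_threshold_hz=15.0):
    """Crude fatigue indicator: long pauses before answering and a flat
    (low-variance) pitch contour both push the score toward 1.0."""
    mean = sum(pitch_values_hz) / len(pitch_values_hz)
    variance = sum((p - mean) ** 2 for p in pitch_values_hz) / len(pitch_values_hz)
    pitch_std = variance ** 0.5

    slow = min(response_delay_s / delay_threshold_s, 1.0)
    flat = 1.0 - min(pitch_std / pitch_std_threshold_hz, 1.0)
    return 0.5 * slow + 0.5 * flat

# A 3-second hesitation with a nearly monotone answer scores high.
print(fatigue_score(3.0, [118, 120, 119, 121, 120]))
```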

This research suggests that we can make predictions about various aspects of driver performance based on what we glean from the movements of a driver’s eyes and that a system can eventually be developed to capture this data and use it to alert people when their driving has become significantly impaired by fatigue.

MOCT (Magneto-Optical Current Transformer)

Added on: February 7th, 2017 by Afsal Meerankutty 1 Comment

An accurate current transducer is a key component of any power system instrumentation. To measure currents, power stations and substations conventionally employ inductive current transformers. As the short-circuit capacity of power systems grows larger and voltage levels go higher, these conventional current transducers become increasingly bulky and costly.

The newly emerged MOCT technology appears to solve many of the problems posed by conventional current transformers. An MOCT measures the rotation angle of plane-polarized light caused by the magnetic field (the Faraday effect) and converts it into a signal of a few volts proportional to that field.
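
The underlying relation is the Faraday effect: the rotation angle is proportional to the line integral of the magnetic field along the optical path, theta = V * integral(B . dl), where V is the Verdet constant of the sensing material. The sketch below inverts that relation for an optical path that fully encloses the conductor (so the line integral equals mu0 * I by Ampere's law); the Verdet constant used is a placeholder, not a datasheet value.

```python
import math

MU_0 = 4 * math.pi * 1e-7          # vacuum permeability, T*m/A
VERDET = 3.0                        # rad/(T*m), placeholder value

def rotation_angle(current_a, verdet=VERDET):
    """Faraday rotation for an optical path enclosing the conductor:
    theta = V * integral(B . dl) = V * mu0 * I."""
    return verdet * MU_0 * current_a

def current_from_angle(theta_rad, verdet=VERDET):
    """Invert the measurement: recover the line current from the
    measured polarization rotation."""
    return theta_rad / (verdet * MU_0)

theta = rotation_angle(1000.0)              # 1 kA primary current
print(theta, current_from_angle(theta))     # angle in rad, current recovered
```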

The main advantage of an MOCT is that there is no need to break the conductor to enclose the optical path in the current-carrying circuit, and there is no electromagnetic interference.

Memristors

Added on: January 25th, 2017 by Afsal Meerankutty No Comments

Generally, when most people think about electronics, they may initially think of products such as cell phones, radios and laptop computers; others, having some engineering background, may think of resistors, capacitors and inductors, the basic components necessary for electronics to function. Such basic components are fairly limited in number, and each has its own characteristic function.

Memristor theory was formulated and named by Leon Chua in a 1971 paper. Chua strongly believed that a fourth device existed to provide conceptual symmetry with the resistor, inductor, and capacitor. This symmetry follows from the description of basic passive circuit elements as defined by a relation between two of the four fundamental circuit variables. A device linking charge and flux (themselves defined as time integrals of current and voltage), which would be the memristor, was still hypothetical at the time. However, it would not be until thirty-seven years later, on April 30, 2008, that a team at HP Labs led by the scientist R. Stanley Williams would announce the discovery of a switching memristor. Based on a thin film of titanium dioxide, it has been presented as an approximately ideal device.

The reason that the memristor is radically different from the other fundamental circuit elements is that, unlike them, it carries a memory of its past. When you turn off the voltage to the circuit, the memristor still remembers how much was applied before and for how long. That’s an effect that can’t be duplicated by any circuit combination of resistors, capacitors, and inductors, which is why the memristor qualifies as a fundamental circuit element.
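
The defining relation is M(q) = dphi/dq, so the device's resistance depends on the total charge that has passed through it. A common way to illustrate this memory is the linear dopant-drift model associated with the HP device; the sketch below uses that model with made-up parameter values (R_on, R_off, mobility, film thickness) purely to show the state persisting once the drive voltage is removed.

```python
# Linear dopant-drift memristor model (illustrative parameter values only).
R_ON, R_OFF = 100.0, 16000.0      # ohms: fully doped / undoped resistance
D = 10e-9                         # m: thickness of the TiO2 film (assumed)
MU_V = 1e-14                      # m^2/(V*s): dopant mobility (assumed)

def simulate(voltage, dt=1e-6, x0=0.1):
    """Integrate the state variable x (doped fraction of the film).
    Memristance M = R_on*x + R_off*(1 - x); dx/dt = mu_v*R_on/D^2 * i."""
    x = x0
    for v in voltage:
        m = R_ON * x + R_OFF * (1.0 - x)
        i = v / m
        x += MU_V * R_ON / D**2 * i * dt
        x = min(max(x, 0.0), 1.0)          # state stays within the film
    return R_ON * x + R_OFF * (1.0 - x)

# Apply a voltage pulse, then remove it: the resistance left behind is
# "remembered" because x only changes while current flows.
m_after_pulse = simulate([1.0] * 5000)           # 5 ms at 1 V shifts the state
m_after_rest = simulate([0.0] * 5000, x0=0.1)    # no drive, state unchanged
print(m_after_pulse, m_after_rest)
```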

The arrangement of these few fundamental circuit components forms the basis of almost all of the electronic devices we use in our everyday life. Thus the discovery of a brand new fundamental circuit element is something not to be taken lightly, and it has the potential to open the door to a brand new type of electronics. HP already has plans to implement memristors in a new type of non-volatile memory which could eventually replace flash and other memory systems.

Pill Camera

Added on: January 12th, 2017 by Afsal Meerankutty No Comments

The aim of technology is to make products on a large scale at cheaper prices and with increased quality. Current technologies have attained part of this, but manufacturing is still at the macro level. The future lies in manufacturing products right from the molecular level. Research in this direction started back in the eighties; at that time, manufacturing at the molecular and atomic level was laughed at. But with the advent of nanotechnology we have realized it to a certain extent. One such product is the pill camera, which is used in the diagnosis of cancer, ulcers and anaemia. It has made a revolution in the field of medicine. This tiny capsule can pass through our body without causing any harm.

It takes pictures of our intestine and transmits them to a receiver for computer analysis of our digestive system. This process can help in tracking any kind of disease related to the digestive system. We also discuss the drawbacks of the pill camera and how these drawbacks can be overcome using a grain-sized motor and a bi-directional wireless telemetry capsule. Besides this, we review the process of manufacturing products using nanotechnology. Some other important applications are also discussed, along with their potential impact on various fields.

We have made great progress in manufacturing products. Looking back from where we stand now, we started with flint knives and stone tools and have reached the stage where we make such tools with more precision than ever. The leap in technology is great, but it is not going to stop here. With our present technology we manufacture products by casting, milling, grinding, chipping and the like. With these technologies we have made more things at lower cost and greater precision than ever before. In the manufacture of these products we have been arranging atoms in great thundering statistical herds. All of us know that manufactured products are made from atoms, and the properties of those products depend on how those atoms are arranged: if we rearrange the atoms in dirt, water and air, we get grass. The next step in manufacturing technology is to manufacture products at the molecular level. The technology used to achieve manufacturing at the molecular level is nanotechnology. Nanotechnology is the creation of useful materials, devices and systems through the manipulation of matter at this minuscule scale. It deals with objects measured in nanometers: a nanometer is a billionth of a meter, a millionth of a millimeter, or about 1/80,000 the width of a human hair.

BrainGate

Added on: December 26th, 2016 by Afsal Meerankutty No Comments

BrainGate is a brain implant system developed by the bio-tech company Cyberkinetics in 2003 in conjunction with the Department of Neuroscience at Brown University. The device was designed to help those who have lost control of their limbs, or other bodily functions, such as patients with amyotrophic lateral sclerosis (ALS) or spinal cord injury. The computer chip, which is implanted into the brain, monitors brain activity in the patient and converts the intention of the user into computer commands. Cyberkinetics describes that “such applications may include novel communications interfaces for motor impaired patients, as well as the monitoring and treatment of certain diseases which manifest themselves in patterns of brain activity, such as epilepsy and depression.”

The BrainGate Neural Interface device consists of a tiny chip containing 100 microscopic electrodes that is surgically implanted in the brain’s motor cortex. The whole apparatus is the size of a baby aspirin. The chip can read signals from the motor cortex, send that information to a computer via connected wires, and translate it to control the movement of a computer cursor or a robotic arm. According to Dr. John Donoghue of Cyberkinetics, there is practically no training required to use BrainGate because the signals read by a chip implanted, for example, in the area of the motor cortex for arm movement, are the same signals that would be sent to the real arm. A user with an implanted chip can immediately begin to move a cursor with thought alone.
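
As an illustration of the decoding step (not Cyberkinetics' actual algorithm, which the text does not describe), the sketch below maps a vector of per-electrode firing rates to a 2-D cursor velocity with a linear decoder whose weights would, in practice, be fitted during a short calibration session; the weight and baseline values here are placeholders.

```python
import numpy as np

def decode_cursor_velocity(firing_rates, weights, baseline):
    """Linear decoder: cursor velocity (vx, vy) as a weighted sum of
    per-electrode firing rates relative to their baseline rates."""
    return weights @ (firing_rates - baseline)

rng = np.random.default_rng(0)
n_electrodes = 100                       # the chip described above has 100 electrodes
weights = rng.normal(scale=0.05, size=(2, n_electrodes))   # placeholder calibration fit
baseline = np.full(n_electrodes, 10.0)                      # spikes/s at rest (assumed)

rates = baseline + rng.normal(scale=2.0, size=n_electrodes)  # one time bin of activity
vx, vy = decode_cursor_velocity(rates, weights, baseline)
print(f"cursor velocity: ({vx:.2f}, {vy:.2f})")
```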

The BrainGate technology platform was designed to take advantage of the fact that many patients with motor impairment have an intact brain that can produce movement commands. This may allow the BrainGate system to create an output signal directly from the brain, bypassing the route through the nerves to the muscles that cannot be used in paralysed people.

DakNet

Added on: November 3rd, 2013 by Afsal Meerankutty 10 Comments

DakNet provides extraordinarily low-cost digital communication, letting remote villages leapfrog past the expense of traditional connectivity solutions and begin development of a full coverage broadband wireless infrastructure. DakNet, an ad hoc network that uses wireless technology to provide asynchronous digital connectivity, is evidence that the marriage of wireless and asynchronous service may indeed be the beginning of a road to universal broadband connectivity.
This paper briefly explains what DakNet is, how wireless technology is implemented in DakNet, its fundamental operations and applications, cost estimates, advantages and disadvantages, and finally how to connect Indian villages with towns, cities and global markets.

BiCMOS Technology

Added on: November 3rd, 2013 by Afsal Meerankutty No Comments

The need for high-performance, low-power, and low-cost systems for network transport and wireless communications is driving silicon technology toward higher speed, higher integration, and more functionality. Furthermore, this integration of RF and analog mixed-signal circuits into high-performance digital signal-processing (DSP) systems must be done with minimum cost overhead to be commercially viable. While some analog and RF designs have been attempted in mainstream digital-only complementary metal-oxide-semiconductor (CMOS) technologies, almost all designs that require stringent RF performance use bipolar or BiCMOS technology. Silicon integrated circuit (IC) products that, at present, require modern bipolar or BiCMOS silicon technology in the wired application space include synchronous optical network (SONET) and synchronous digital hierarchy (SDH) parts operating at 10 Gb/s and higher.

The viability of a mixed digital/analog/RF chip depends on the cost of making the silicon with the required elements; in practice, it must approximate the cost of a CMOS wafer. Cycle times for processing the wafer should not significantly exceed those for a digital CMOS wafer, and yields of the SoC chip must be similar to those of a multi-chip implementation. Much of this article examines process techniques that achieve the objectives of low cost, rapid cycle time, and solid yield.

Space Time Adaptive Processing

Added on: October 31st, 2013 by Afsal Meerankutty 3 Comments

Space-time adaptive processing (STAP) is a signal processing technique most commonly used in radar systems. It involves adaptive array processing algorithms to aid in target detection. Radar signal processing benefits from STAP in areas where interference is a problem (i.e. ground clutter, jamming, etc.). Through careful application of STAP, it is possible to achieve order-of-magnitude sensitivity improvements in target detection.
STAP involves a two-dimensional filtering technique using a phased-array antenna with multiple spatial channels. Coupling multiple spatial channels with pulse-Doppler waveforms lends to the name “space-time.” Applying the statistics of the interference environment, an adaptive STAP weight vector is formed. This weight vector is applied to the coherent samples received by the radar.
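
In its simplest form the adaptive weight vector is w = R^-1 s (up to a scale factor), where R is the estimated space-time covariance matrix of the interference and s is the steering vector for the target's angle and Doppler. The NumPy sketch below forms that weight from a handful of training snapshots; the array size, snapshot count and steering phase are arbitrary example values, and the diagonal loading is a common practical safeguard rather than something taken from the text.

```python
import numpy as np

def stap_weights(snapshots, steering):
    """Sample-matrix-inversion STAP: estimate the interference covariance
    from training snapshots and form w ~ R^-1 * s."""
    r = snapshots @ snapshots.conj().T / snapshots.shape[1]
    r += 1e-3 * np.trace(r).real / r.shape[0] * np.eye(r.shape[0])  # diagonal loading
    return np.linalg.solve(r, steering)

# Example dimensions: 4 spatial channels x 8 pulses = 32 degrees of freedom.
n_dof, n_train = 32, 100
rng = np.random.default_rng(1)
training = (rng.normal(size=(n_dof, n_train))
            + 1j * rng.normal(size=(n_dof, n_train)))       # interference-only data
steering = np.exp(1j * 2 * np.pi * 0.1 * np.arange(n_dof))  # target angle/Doppler

w = stap_weights(training, steering)
output = w.conj() @ (steering + training[:, 0])   # filter one received snapshot
```
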
In a ground moving target indicator (GMTI) system, an airborne radar collects the returned echo from the moving target on the ground. However, the received signal contains not only the reflected echo from the target, but also the returns from the illuminated ground surface. The return from the ground is generally referred to as clutter.
The clutter return comes from all the areas illuminated by the radar beam, so it occupies all range bins and all directions. The total clutter return is often much stronger than the returned signal echo, which poses a great challenge to target detection. Clutter filtering, therefore, is a critical part of a GMTI system.

Biosensors

Added on: October 31st, 2013 by Afsal Meerankutty 2 Comments

A biosensor is a device for the detection of an analyte that combines a biological component with a physicochemical detector component. Many optical biosensors based on the phenomenon of surface plasmon resonance are evanescent-wave techniques. The most widespread example of a commercial biosensor is the blood glucose biosensor, which uses the enzyme glucose oxidase to break blood glucose down.
Biosensors are the combination of a bioreceptor and a transducer. The bioreceptor is a biomolecule that recognizes the target, whereas the transducer converts the recognition event into a measurable signal. Biosensors are used in many diverse areas of the market; they also serve clinical testing, one of the biggest diagnostic markets, worth around US$4,000 million.
They are very useful for measuring a specific target with great accuracy, the measurement is rapid and can be read directly, and they are simple to use: the receptor and transducer are integrated into a single sensor that can operate without reagents.

Haptic Systems

Added on: October 10th, 2013 by 5 Comments

‘Haptics’ is a technology that adds the sense of touch to virtual environments. Users are given the illusion that they are touching or manipulating a real physical object.
This seminar discusses the important concepts in haptics, some of the most commonly used haptics systems like ‘Phantom’, ‘Cyberglove’, ‘Novint Falcon’ and such similar devices. Following this, a description about how sensors and actuators are used for tracking the position and movement of the haptic systems, is provided.
The different types of force rendering algorithms are discussed next. The seminar explains the blocks in force rendering. Then a few applications of haptic systems are taken up for discussion.

Black Box

Added on: October 8th, 2013 by 5 Comments

As technology progresses, travel speeds also increase, bringing source and destination ever closer together. The main advancement in air travel has been the airplane, a major discovery of technology. But as speed increases, so does the horror of an air crash: if a plane crashes from a height of 2,000 m or more, it is a terror for everybody on board. So, to obtain feedback on the various activities that happen in the plane and record them, engineers needed a mechanism to record such activities.
With any airplane crash, there are many unanswered questions as to what brought the plane down. Investigators turn to the airplane’s flight data recorder (FDR) and cockpit voice recorder (CVR), also known as “black boxes,” for answers. In Flight 261, the FDR contained 48 parameters of flight data, and the CVR recorded a little more than 30 minutes of conversation and other audible cockpit noises.

Underwater Communication Systems

Added on: October 3rd, 2013 by 7 Comments

There is a high demand for underwater communication systems due to the increase in current human underwater activities. Underwater communication systems employ either sonar or electromagnetic waves as a means of transferring signals. These waves are different physically and electrically, and thus the systems that employ them also differ in their design architecture, wave propagation and devices used for emission and reception. As a result, the two systems have varying advantages and limitations. This paper presents an in-depth review of underwater communication based on sonar and electromagnetic waves, a comparison of the two systems and a discussion of the environmental impacts of using these waves for underwater communication. In the tradeoff between preserving the underwater environment and the need for underwater communication, it appears that underwater electromagnetic wave communication has the most potential to be the environmentally-friendly system of the future.

Sonar and Acoustic Waves Communication

Added on: October 3rd, 2013 by 1 Comment

In recent years, the demand for underwater communication has increased due to growing human interest and activity underwater. Underwater communication is done with the help of sonar, electromagnetic and acoustic waves, which are different in nature. This paper presents an overview of sonar and acoustic underwater communication, shows that acoustic wave communication is better than sonar wave communication, and explains the factors that affect acoustic wave communication.

Optical Fibers

Added on: March 25th, 2012 by 1 Comment

An optical fiber (or fibre) is a glass or plastic fiber that carries light along its length. Fiber optics is the overlap of applied science and engineering concerned with the design and application of optical fibers. Optical fibers are widely used in fiber-optic communications, which permits transmission over longer distances and at higher bandwidths (data rates) than other forms of communications. Fibers are used instead of metal wires because signals travel along them with less loss, and they are also immune to electromagnetic interference. Fibers are also used for illumination, and are wrapped in bundles so they can be used to carry images, thus allowing viewing in tight spaces. Specially designed fibers are used for a variety of other applications, including sensors and fiber lasers.

Light is kept in the core of the optical fiber by total internal reflection. This causes the fiber to act as a waveguide. Fibers which support many propagation paths or transverse modes are called multi-mode fibers (MMF), while those which can only support a single mode are called single-mode fibers (SMF). Multi-mode fibers generally have a larger core diameter, and are used for short-distance communication links and for applications where high power must be transmitted. Single-mode fibers are used for most communication links longer than 550 meters (1,800 ft).
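
Two standard quantities make the single-mode/multi-mode distinction concrete: the numerical aperture NA = sqrt(n1^2 - n2^2) set by the core and cladding indices, and the normalized frequency V = (2*pi*a/lambda) * NA, with the fiber supporting only a single mode when V < 2.405. The sketch below evaluates both for illustrative index and core-radius values, not for any particular commercial fiber.

```python
import math

def numerical_aperture(n_core, n_clad):
    """NA = sqrt(n1^2 - n2^2): sets the acceptance cone for total
    internal reflection at the core/cladding boundary."""
    return math.sqrt(n_core**2 - n_clad**2)

def v_number(core_radius_um, wavelength_um, na):
    """Normalized frequency; single-mode operation requires V < 2.405."""
    return 2 * math.pi * core_radius_um / wavelength_um * na

na = numerical_aperture(1.450, 1.444)     # illustrative index values
v_small_core = v_number(4.1, 1.55, na)    # ~8 um core at 1550 nm -> V below 2.405
v_large_core = v_number(25.0, 1.55, na)   # 50 um core at the same wavelength -> multi-mode
print(f"NA={na:.3f}, V(small core)={v_small_core:.2f}, V(large core)={v_large_core:.2f}")
```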

Optical Fiber Communication System

Added on: March 25th, 2012 by No Comments

Communication is an important part of our daily life. The communication process involves information generation, transmission, reception and interpretation. As the need for various types of communication such as voice, images, video and data grows, the demand for large transmission capacity also increases. This need for large capacity has driven the rapid development of lightwave technology, and a worldwide industry has developed around it. An optical or lightwave communication system is one that uses light waves as the carrier for transmission. An optical communication system mainly involves three parts: transmitter, receiver and channel. In optical communication the transmitters are light sources, the receivers are light detectors and the channel is the optical fiber. The channel plays an important role because it carries the data from the transmitter to the receiver; hence, we shall discuss mainly optical fibers here.

Solar Power Satellites

Added on: March 22nd, 2012 by 5 Comments

The new millennium has introduced increased pressure to find new renewable energy sources. The exponential increase in population has led to global crises such as global warming, environmental pollution, climate change and the rapid depletion of fossil-fuel reserves. The demand for electric power also increases at a much higher pace than other energy demands as the world becomes industrialized and computerized. Under these circumstances, research has been carried out into the possibility of building a power station in space that transmits electricity to Earth by way of radio waves: the Solar Power Satellite. A Solar Power Satellite (SPS) converts solar energy into microwaves and sends them as a focused beam to a receiving antenna on the Earth for conversion to ordinary electricity. SPS is a clean, large-scale, stable electric power source. Solar Power Satellites are known by a variety of other names such as Satellite Power System, Space Power Station, Space Power System, Solar Power Station and Space Solar Power Station. One of the key technologies needed to enable the future feasibility of SPS is Microwave Wireless Power Transmission (WPT). WPT is based on the energy-transfer capacity of a microwave beam, i.e. energy can be transmitted by a well-focused microwave beam. Advances in phased-array antennas and rectennas have provided the building blocks for a realizable WPT system.
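
The practicality of microwave WPT hinges on beam spreading: a diffraction-limited beam from a transmitting aperture of diameter D at wavelength lambda spreads to a ground spot of roughly 2.44 * lambda * d / D at distance d, which drives the rectenna size. The sketch below runs that rough number for a 2.45 GHz link from geostationary orbit; the 2.45 GHz frequency and kilometre-scale transmit antenna are common illustrative assumptions, not a design from the text.

```python
import math

C = 3.0e8                    # speed of light, m/s

def beam_spot_diameter(freq_hz, tx_aperture_m, distance_m):
    """Rough diffraction-limited spot diameter on the ground:
    ~2.44 * wavelength * distance / transmitting-aperture diameter."""
    wavelength = C / freq_hz
    return 2.44 * wavelength * distance_m / tx_aperture_m

# 2.45 GHz beam from geostationary orbit (~36,000 km) with a 1 km transmit antenna.
spot_m = beam_spot_diameter(2.45e9, 1000.0, 3.6e7)
print(f"ground spot diameter ~ {spot_m / 1000:.1f} km")   # on the order of 10 km
```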

Tri-Gate Transistors

Added on: March 18th, 2012 by No Comments

Tri-Gate transistors, the first to be truly three-dimensional, mark a major revolution in the semiconductor industry. The industry continues to push technological innovation to keep pace with Moore’s Law, shrinking transistors so that ever more can be packed on a chip. However, at future technology nodes the ability to shrink transistors becomes more and more problematic, in part due to worsening short-channel effects and an increase in parasitic leakage with scaling of the gate-length dimension. In this regard, the Tri-Gate transistor architecture makes it possible to continue Moore’s Law at 22 nm and below without a major transistor redesign. The physics, technology and advantages of the device are briefly discussed in this paper.

Electromagnetic Transducer

Added on: March 17th, 2012 by No Comments

This paper describes a novel electromagnetic transducer called the Four Quadrant Transducer (4QT) for hybrid electric vehicles. The system consists of one electrical machine unit (including two rotors) and two inverters, which enable the vehicle’s Internal Combustion Engine (ICE) to run at its optimum working points regarding efficiency, almost independently of the changing load requirements at the wheels. In other words, the ICE is operated at high torque and low speed as much as possible. As a consequence, reduced fuel consumption will be achieved.

The basic structure of the Four Quadrant Transducer system, simulation results and ideas about suitable topologies for designing a compact machine unit are reported. The simulated system of a passenger car is equipped with a single-step gearbox, making it simple and cost-effective. Since the engine is not mechanically connected to the wheels and the electrical components have lower power ratings than the engine itself, the system takes advantage of the best characteristics of the series and parallel hybrids, respectively. The proposed concept looks promising, and fuel savings of more than 40% compared with a conventional vehicle can be achieved.

Threats of HEMP and HPM Devices

Added on: March 16th, 2012 by No Comments

Electromagnetic Pulse (EMP) is an instantaneous, intense energy field that can overload or disrupt at a distance numerous electrical systems and high technology microcircuits, which are especially sensitive to power surges. A large scale EMP effect can be produced by a single nuclear explosion detonated high in the atmosphere. This method is referred to as High-Altitude EMP (HEMP). A similar, smaller-scale EMP effect can be created using non-nuclear devices with powerful batteries or reactive chemicals. This method is called High Power Microwave (HPM). Several nations, including reported sponsors of terrorism, may currently have a capability to use EMP as a weapon for cyber warfare or cyber terrorism to disrupt communications and other parts of the U.S. critical infrastructure. Also, some equipment and weapons used by the U.S. military may be vulnerable to the effects of EMP.

The threat of an EMP attack against the United States is hard to assess, but some observers indicate that it is growing along with worldwide access to newer technologies and the proliferation of nuclear weapons. In the past, the threat of mutually assured destruction provided a lasting deterrent against the exchange of multiple high-yield nuclear warheads. However, now even a single, specially designed low-yield nuclear explosion high above the United States, or over a battlefield, can produce a large-scale EMP effect that could result in a widespread loss of electronics, but no direct fatalities, and may not necessarily evoke a large nuclear retaliatory strike by the U.S. military. This, coupled with the possible vulnerability of U.S. commercial electronics and U.S. military battlefield equipment to the effects of EMP, may create a new incentive for other countries to develop or acquire a nuclear capability.

Policy issues raised by this threat include (1) what is the United States doing to protect civilian critical infrastructure systems against the threat of EMP, (2) how does the vulnerability of U.S. civilian and military electronics to EMP attack encourage other nations to develop or acquire nuclear weapons, and (3) how likely are terrorist organizations to launch a smaller-scale EMP attack against the United States?

Electronic Fuel Injection System

Added on: March 15th, 2012 by 1 Comment

In developed and developing countries, considerable emphasis is being laid on minimizing pollutants from internal combustion engines. A two-stroke cycle engine produces a considerable amount of pollutants when gasoline is used as the fuel, due to short-circuiting of the fresh charge. These pollutants, which include unburnt hydrocarbons and carbon monoxide, are harmful to living beings. There is a strong need to develop new technology that can minimize pollution from these engines.

Direct fuel injection has been demonstrated to significantly reduce unburned hydrocarbon emissions by timing the injection of fuel in such way as to prevent the escape of unburned fuel from the exhaust port during the scavenging process.

The increased use of petroleum fuels by automobiles has caused not only fuel scarcity, price hikes, higher import bills and economic imbalance, but also health hazards due to toxic emissions. Conventional fuels used in automobiles emit toxic pollutants, which cause asthma, chronic cough, skin degradation, breathlessness, eye and throat problems, and even cancer.

In recent years, environmental improvement (CO2, NOx and Ozone reduction) and energy issues have become more and more important in worldwide concerns. Natural gas is a good alternative fuel to improve these problems because of its abundant availability and clean burning characteristics.

Gi-Fi

Added on: March 15th, 2012 by 6 Comments

Gi-Fi, or Gigabit Wireless, is the world’s first transceiver integrated on a single chip that operates at 60 GHz on a CMOS process. It will allow wireless transfer of audio and video data at up to 5 gigabits per second, ten times the current maximum wireless transfer rate, at one-tenth of the cost, usually within a range of 10 meters. It uses a 5 mm square chip with a 1 mm wide antenna, burning less than 2 mW of power to transmit data wirelessly over short distances, much like Bluetooth.
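
To put the headline rate in perspective, a quick back-of-the-envelope calculation (the payload size and the comparison rate are just examples) shows what 5 Gbit/s means for an everyday transfer relative to a typical 54 Mbit/s Wi-Fi link.

```python
def transfer_time_seconds(size_gigabytes, rate_gbit_per_s):
    """Idealized transfer time: payload size divided by link rate,
    ignoring protocol overhead and error correction."""
    return size_gigabytes * 8 / rate_gbit_per_s

dvd_gb = 4.7                                   # example payload size
print(transfer_time_seconds(dvd_gb, 5.0))      # Gi-Fi-class link: roughly 7.5 s
print(transfer_time_seconds(dvd_gb, 0.054))    # 54 Mbit/s Wi-Fi: roughly 700 s
```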

Gi-Fi will help push wireless communications to faster speeds. For many years cables ruled the world, and optical fibers played a dominant role because of their higher bit rates and faster transmission. But the installation of cables is difficult, which led to wireless access. The foremost of these is Bluetooth, which can cover about 9-10 metres; Wi-Fi followed, with a coverage area of about 91 metres. No doubt the introduction of Wi-Fi wireless networks has proved a revolutionary solution to the “last mile” problem. However, the standard’s original limitations on data-exchange rate and range, the number of changes, and the high cost of the infrastructure have not yet made it possible for Wi-Fi to become a total threat to cellular networks on the one hand, or to hard-wired networks on the other. But man’s continuous quest for even better technology, despite the substantial advantages of present technologies, led to the introduction of a new, more up-to-date standard for data exchange rate, i.e. Gi-Fi.

The development will enable the truly wireless office and home of the future. As the integrated transceiver is extremely small, it can be embedded into devices. The breakthrough will mean the networking of office and home equipment without wires will finally become a reality.
In this report we present a low-cost, low-power, high-bandwidth chip which will be vital in enabling the digital economy of the future.

Tele-Graffiti

Added on: March 15th, 2012 by No Comments

There are several ways of building a remote sketching system. One way is to use a tablet and a stylus to input the sketch, and a computer monitor to display the sketch at the remote site. Such systems have a number of disadvantages. Writing with a stylus on a glass tablet is unnatural compared to sketching with a regular pen and paper. Shading and other effects are harder to achieve. Changing color means using the computer to select a new color. Incorporating existing hard-copy documents such as a graded exam is impossible.

Another way of building a remote sketching system is to use a video camera to image the sketch at one end, transmit the captured video to the other end, and display it there using an LCD projector. See Figure 1 for a schematic diagram of how such a system might operate. The first such camera-projector based remote sketching system was Pierre Wellner’s Xerox “Double DigitalDesk”.

ARM Processor

Added on: March 14th, 2012 by 1 Comment

An ARM processor is any of several 32-bit RISC (reduced instruction set computer) microprocessors developed by Advanced RISC Machines, Ltd. The ARM architecture was originally conceived by Acorn Computers Ltd. in the 1980s. Since then, it has evolved into a family of microprocessors extensively used in consumer electronic devices such as mobile phones, multimedia players, pocket calculators and PDAs (personal digital assistants).
ARM processor features include:

  • Load/store architecture
  • An orthogonal instruction set
  • Mostly single-cycle execution
  • A 16 × 32-bit register file
  • Enhanced power-saving design

ARM provides developers with intellectual property (IP) solutions in the form of processors, physical IP, cache and SoC designs, application-specific standard products (ASSPs), related software and development tools — everything you need to create an innovative product design based on industry-standard components that are ‘next generation’ compatible.

IRIS Recognition

Added on: March 12th, 2012 by 1 Comment

Iris recognition is an automated method of capturing a person’s unique biological data that distinguishes him or her from another individual. It has emerged as one of the most powerful and accurate identification techniques in the modern world and has proven to be one of the most foolproof techniques for identifying individuals without the use of cards, PINs or passwords. It facilitates automatic identification, whereby electronic transactions or access to places, information or accounts are made easier, quicker and more secure.

A method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence. The most unique phenotypic feature visible in a person’s face is the detailed texture of each eye’s iris: an estimate of its statistical complexity in a sample of the human population reveals variation corresponding to several hundred independent degrees of freedom. Morphogenetic randomness in the texture expressed phenotypically in the iris trabecular meshwork ensures that a test of statistical independence on two coded patterns originating from different eyes is passed almost certainly, whereas the same test is failed almost certainly when the compared codes originate from the same eye. The visible texture of a person’s iris in a real-time video image is encoded into a compact sequence of multi-scale quadrature 2-D Gabor wavelet coefficients, whose most significant bits comprise a 512-byte “iris code”. Statistical decision theory generates identification decisions from Exclusive-OR comparisons of complete iris codes at the rate of 4,000 per second, including calculation of decision confidence levels. The distributions observed empirically in such comparisons imply a theoretical “cross-over” error rate of one in 131,000 when a decision criterion is adopted that would equalize the False Accept and False Reject error rates.
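
The comparison step described above reduces to counting disagreeing bits between two iris codes: a normalized Hamming distance computed with XOR, with a decision threshold lying between the same-eye and different-eye distributions. The sketch below does this for byte arrays; the 512-byte length matches the description, while the 0.32 threshold is only an example value, not the criterion used in the paper.

```python
import os

def hamming_distance(code_a: bytes, code_b: bytes) -> float:
    """Fraction of disagreeing bits between two equal-length iris codes,
    computed with a byte-wise XOR and a population count."""
    assert len(code_a) == len(code_b)
    differing = sum(bin(a ^ b).count("1") for a, b in zip(code_a, code_b))
    return differing / (8 * len(code_a))

def same_eye(code_a: bytes, code_b: bytes, threshold: float = 0.32) -> bool:
    """Accept the match if the codes disagree on fewer bits than the
    decision criterion allows (threshold is an example value)."""
    return hamming_distance(code_a, code_b) < threshold

enrolled = os.urandom(512)                      # stand-in for a stored iris code
probe_same = bytes(b ^ 1 for b in enrolled)     # nearly identical code (few flipped bits)
probe_other = os.urandom(512)                   # code from a different eye
print(same_eye(enrolled, probe_same), same_eye(enrolled, probe_other))
```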

Artificial Eye

Added on: March 3rd, 2012 by No Comments

The retina is a thin layer of neural tissue that lines the back wall inside the eye. Some of these cells act to receive light, while others interpret the information and send messages to the brain through the optic nerve. This is part of the process that enables us to see. In damaged or dysfunctional retina, the photoreceptors stop working, causing blindness. By some estimates, there are more than 10 million people worldwide affected by retinal diseases that lead to loss of vision.

The absence of effective therapeutic remedies for retinitis pigmentosa (RP) and age-related macular degeneration (AMD) has motivated the development of experimental strategies to restore some degree of visual function to affected patients. Because the remaining retinal layers are anatomically spared, several approaches have been designed to artificially activate this residual retina and thereby the visual system.

At present, two general strategies have been pursued. The “Epiretinal” approach involves a semiconductor-based device placed above the retina, close to or in contact with the nerve fiber layer retinal ganglion cells. The information in this approach must be captured by a camera system before transmitting data and energy to the implant. The “Sub retinal” approach involves the electrical stimulation of the inner retina from the sub retinal space by implantation of a semiconductor-based micro photodiode array (MPA) into this location. The concept of the sub retinal approach is that electrical charge generated by the MPA in response to a light stimulus may be used to artificially alter the membrane potential of neurons in the remaining retinal layers in a manner to produce formed images.

Some researchers have developed an implant system where a video camera captures images, a chip processes the images, and an electrode array transmits the images to the brain. It’s called Cortical Implants.

Transparent Electronics

Added on: March 1st, 2012 by No Comments

Transparent electronics is an emerging science and technology field focused on producing ‘invisible’ electronic circuitry and opto-electronic devices. Applications include consumer electronics, new energy sources, and transportation; for example, automobile windshields could transmit visual information to the driver. Glass in almost any setting could also double as an electronic device, possibly improving security systems or offering transparent displays. In a similar vein, windows could be used to produce electrical power. Other civilian and military applications in this research field include real-time wearable displays. As for conventional Si/III–V-based electronics, the basic device structure is based on semiconductor junctions and transistors. However, the device building block materials, the semiconductor, the electric contacts, and the dielectric/passivation layers, must now be transparent in the visible range, a true challenge! Therefore, the first scientific goal of this technology must be to discover, understand, and implement transparent high-performance electronic materials. The second goal is their implementation and evaluation in transistor and circuit structures. The third goal relates to achieving application-specific properties, since transistor performance and materials property requirements vary depending on the final product device specifications. Consequently, enabling this revolutionary technology requires bringing together expertise from various pure and applied sciences, including materials science, chemistry, physics, electrical/electronic/circuit engineering, and display science.

Plastic Memory

Added on: March 1st, 2012 by 1 Comment

A conducting plastic has been used to create a new memory technology with the potential to store a megabit of data in a millimetre-square device, 10 times denser than current magnetic memories. This device is cheap and fast, but cannot be rewritten, so it would only be suitable for permanent storage.

The device sandwiches a blob of a conducting polymer called PEDOT and a silicon diode between perpendicular wires.

The key to the new technology was discovered by passing a high current through PEDOT (polyethylenedioxythiophene), which turns it into an insulator, rather like blowing a fuse. The polymer thus has two possible states, conductor and insulator, which form the one and zero necessary to store digital data.

However, turning the polymer into an insulator involves a permanent chemical change, meaning the memory can only be written once.

3D Television

Added on: February 28th, 2012 by No Comments

Three-dimensional TV is expected to be the next revolution in TV history. The authors implemented a 3D TV prototype system with real-time acquisition, transmission and 3D display of dynamic scenes, and developed a distributed, scalable architecture to manage the high computation and bandwidth demands. The 3D display shows high-resolution stereoscopic color images for multiple viewpoints without special glasses. This is the first real-time end-to-end 3D TV system with enough views and resolution to provide a truly immersive 3D experience. Japan plans to make this futuristic television a commercial reality by 2020 as part of a broad national project that will bring together researchers from the government, technology companies and academia. The targeted “virtual reality” television would allow people to view high-definition images in 3D from any angle, in addition to being able to touch and smell the objects being projected upwards from a screen to the floor.

Chameleon Chips

Added on: February 26th, 2012 by No Comments

Today’s microprocessors sport a general-purpose design which has its own advantages and disadvantages.

  • Adv: One chip can run a range of programs. That’s why you don’t need separate computers for different jobs, such as crunching spreadsheets or editing digital photos.
  • Disadv: For any one application, much of the chip’s circuitry isn’t needed, and the presence of those “wasted” circuits slows things down.

Suppose, instead, that the chip’s circuits could be tailored specifically for the problem at hand–say, computer-aided design–and then rewired, on the fly, when you loaded a tax-preparation program. One set of chips, little bigger than a credit card, could do almost anything, even changing into a wireless phone. The market for such versatile marvels would be huge, and would translate into lower costs for users.

So computer scientists are hatching a novel concept that could increase number-crunching power–and trim costs as well. Call it the chameleon chip.

Chameleon chips would be an extension of what can already be done with field-programmable gate arrays (FPGAS).

An FPGA is covered with a grid of wires. At each crossover, there’s a switch that can be semipermanently opened or closed by sending it a special signal. Usually the chip must first be inserted in a little box that sends the programming signals. But now, labs in Europe, Japan, and the U.S. are developing techniques to rewire FPGA-like chips anytime–and even software that can map out circuitry that’s optimized for specific problems.

The chips still won’t change colors, but they may well color the way we use computers in years to come. The concept is a fusion of custom integrated circuits and programmable logic: for highly performance-oriented tasks, custom chips that do one or two things spectacularly, rather than many things only adequately, are used. Now, using field-programmable chips, we have chips that can be rewired in an instant, so the benefits of customization can be brought to the mass market.

A reconfigurable processor is a microprocessor with erasable hardware that can rewire itself dynamically. This allows the chip to adapt effectively to the programming tasks demanded by the particular software it is interfacing with at any given time. Ideally, the reconfigurable processor can transform itself from a video chip to a central processing unit (CPU) to a graphics chip, for example, all optimized to allow applications to run at the highest possible speed. The new chips can be called a “chip on demand.” In practical terms, this ability can translate to immense flexibility in terms of device functions. For example, a single device could serve as both a camera and a tape recorder (among numerous other possibilities): you would simply download the desired software and the processor would reconfigure itself to optimize performance for that function.

Reconfigurable processors compete in the market with traditional hard-wired chips and several types of programmable microprocessors. Programmable chips have been in existence for over ten years. Digital signal processors (DSPs), for example, are high-performance programmable chips used in cell phones, automobiles, and various types of music players.

Another version, programmable logic chips are equipped with arrays of memory cells that can be programmed to perform hardware functions using software tools. These are more flexible than the specialized DSP chips but also slower and more expensive. Hard-wired chips are the oldest, cheapest, and fastest – but also the least flexible – of all the options.

iDEN

Added on: February 25th, 2012 by No Comments

iDEN is a mobile telecommunications technology, developed by Motorola, which provides its users the benefits of a trunked radio and a cellular telephone. iDEN places more users in a given spectral space, compared to analog cellular and two-way radio systems, by using speech compression and time-division multiple access (TDMA). Notably, iDEN is designed, and licensed, to operate on individual frequencies that may not be contiguous. iDEN operates on 25 kHz channels, but only occupies 20 kHz in order to provide interference protection via guard bands. By comparison, TDMA cellular (IS-54 and IS-136) is licensed in blocks of 30 kHz channels, but each emission occupies 40 kHz, and is capable of serving the same number of subscribers per channel as iDEN. iDEN supports either three or six interconnect users (phone users) per channel, and either six or twelve dispatch users (push-to-talk users) per channel. Since there is no analogue component of iDEN, mechanical duplexing in the handset is unnecessary, so time-domain duplexing is used instead, the same way that other digital-only technologies duplex their handsets. Also, like other digital-only technologies, hybrid or cavity duplexing is used at the base station (cell site).
