
Wavelet Video Processing Technology

Added on: March 2nd, 2012

The biggest obstacle to the multimedia revolution is digital obesity: the bloat that occurs when pictures, sound and video are converted from their natural analog form into computer language for manipulation or transmission. With the present explosion of high-quality data, the need to compress it with as little distortion as possible is the need of the hour. Compression lowers the cost of storage and transmission by packing data into a smaller space.

One of the hottest areas of advanced compression is wavelet compression. Wavelet Video Processing Technology offers some alluring features, including high compression ratios and eye-pleasing enlargements.
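
To make the idea concrete, here is a minimal sketch of wavelet-style compression on a single image tile, assuming NumPy is available: a one-level 2D Haar transform, thresholding of small coefficients, and reconstruction. It only illustrates the principle, not the codec the report describes; the tile data and threshold are arbitrary.

```python
import numpy as np

def haar2d(block):
    """Single-level 2D Haar transform of an even-sized 2D array."""
    lo = (block[:, 0::2] + block[:, 1::2]) / 2.0       # row averages
    hi = (block[:, 0::2] - block[:, 1::2]) / 2.0       # row differences
    rows = np.hstack([lo, hi])
    lo = (rows[0::2, :] + rows[1::2, :]) / 2.0          # column averages
    hi = (rows[0::2, :] - rows[1::2, :]) / 2.0          # column differences
    return np.vstack([lo, hi])

def ihaar2d(coeffs):
    """Inverse of haar2d."""
    n, m = coeffs.shape
    lo, hi = coeffs[: n // 2, :], coeffs[n // 2 :, :]
    rows = np.empty_like(coeffs)
    rows[0::2, :], rows[1::2, :] = lo + hi, lo - hi
    lo, hi = rows[:, : m // 2], rows[:, m // 2 :]
    out = np.empty_like(coeffs)
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out

block = np.random.rand(8, 8) * 255          # stand-in for an 8x8 image tile
coeffs = haar2d(block)
coeffs[np.abs(coeffs) < 10] = 0              # "compression": drop small coefficients
approx = ihaar2d(coeffs)                     # visually close approximation of the tile
print("kept", np.count_nonzero(coeffs), "of", coeffs.size, "coefficients")
```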

Transparent Electronics

Added on: March 1st, 2012

Transparent electronics is an emerging science and technology field focused on producing ‘invisible’ electronic circuitry and opto-electronic devices. Applications include consumer electronics, new energy sources, and transportation; for example, automobile windshields could transmit visual information to the driver. Glass in almost any setting could also double as an electronic device, possibly improving security systems or offering transparent displays. In a similar vein, windows could be used to produce electrical power. Other civilian and military applications in this research field include real-time wearable displays. As in conventional Si/III–V-based electronics, the basic device structure is based on semiconductor junctions and transistors. However, the device building-block materials (the semiconductor, the electric contacts, and the dielectric/passivation layers) must now be transparent in the visible range, a true challenge. Therefore, the first scientific goal of this technology must be to discover, understand, and implement transparent high-performance electronic materials. The second goal is their implementation and evaluation in transistor and circuit structures. The third goal relates to achieving application-specific properties, since transistor performance and materials property requirements vary depending on the final product device specifications. Consequently, enabling this revolutionary technology requires bringing together expertise from various pure and applied sciences, including materials science, chemistry, physics, electrical/electronic/circuit engineering, and display science.

Plastic Memory

Added on: March 1st, 2012

A conducting plastic has been used to create a new memory technology with the potential to store a megabit of data in a millimetre-square device, ten times denser than current magnetic memories. The device is cheap and fast, but cannot be rewritten, so it would only be suitable for permanent storage.

The device sandwiches a blob of a conducting polymer called PEDOT and a silicon diode between perpendicular wires.

The key to the new technology was discovered by passing a high current through PEDOT (poly(3,4-ethylenedioxythiophene)), which turns it into an insulator, rather like blowing a fuse. The polymer therefore has two possible states, conductor and insulator, which form the one and zero needed to store digital data.

However, turning the polymer into an insulator involves a permanent chemical change, meaning the memory can only be written once.
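
A toy software model of such a write-once crosspoint memory may help fix the idea; the class below is purely illustrative and does not model the actual device physics.

```python
# Toy model of a write-once PEDOT crosspoint memory: each cell starts as a
# conductor (logical 1) and can be "fused" into an insulator (logical 0) exactly once.
class WriteOnceMemory:
    def __init__(self, rows, cols):
        self.cells = [[1] * cols for _ in range(rows)]   # all cells start conductive

    def write_zero(self, r, c):
        # Passing a high current irreversibly turns the polymer into an insulator.
        self.cells[r][c] = 0

    def read(self, r, c):
        return self.cells[r][c]

mem = WriteOnceMemory(4, 4)
mem.write_zero(1, 2)        # permanent: there is no way to restore the conducting state
print(mem.read(1, 2), mem.read(0, 0))   # -> 0 1
```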

Phishing

Added on: February 28th, 2012

In the field of computer security, phishing is the criminally fraudulent process of attempting to acquire sensitive information such as usernames, passwords and credit card details by masquerading as a trustworthy entity in an electronic communication. A phishing message is a fraudulent e-mail that attempts to get you to divulge personal data that can then be used for illegitimate purposes.

There are many variations on this scheme. It is possible to phish for other information in addition to usernames and passwords, such as credit card numbers, bank account numbers, social security numbers and mothers’ maiden names. Phishing presents direct risks through the use of stolen credentials and indirect risks to institutions that conduct business online through erosion of customer confidence. The damage caused by phishing ranges from denial of access to e-mail to substantial financial loss.

This report is also concerned with anti-phishing techniques. There are several different techniques to combat phishing, including legislation and technology created specifically to protect against it. No single technology will completely stop phishing. However, a combination of good organization and practice, proper application of current technologies and improvements in security technology has the potential to drastically reduce the prevalence of phishing and the losses suffered from it. Anti-phishing software and computer programs are designed to prevent the occurrence of phishing and trespassing on confidential information. Anti-phishing software is designed to track websites and monitor activity; any suspicious behavior can be automatically reported and even reviewed as a report after a period of time.
The report also covers detecting phishing attacks, how to prevent and avoid being scammed, how to react when you suspect or uncover a phishing attack, and what you can do to help stop phishers.
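
As a hedged illustration of the software side, the snippet below applies a few simple heuristics an anti-phishing filter might use to score a suspicious URL; the rules, weights and example URLs are invented for illustration and are far simpler than any real product.

```python
from urllib.parse import urlparse

# Illustrative heuristics only; real anti-phishing filters combine many more
# signals (blacklists, page content analysis, reputation data, machine learning).
def phishing_score(url: str) -> int:
    host = urlparse(url).hostname or ""
    score = 0
    if host.replace(".", "").isdigit():          # raw IP address instead of a domain
        score += 2
    if host.count(".") > 3:                      # long chain of subdomains
        score += 1
    if "@" in url:                               # user@host trick hides the real host
        score += 2
    for brand in ("paypal", "bank", "ebay"):     # brand name buried in a lookalike host
        if brand in host and not host.endswith(brand + ".com"):
            score += 2
    return score

print(phishing_score("http://192.168.10.5/login"))                 # suspicious
print(phishing_score("http://secure-paypal.example.ru/verify"))    # suspicious
print(phishing_score("https://www.example.com/"))                  # benign
```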

3D Television

Added on: February 28th, 2012

Three-dimensional TV is expected to be the next revolution in TV history. Researchers have implemented a 3D TV prototype system with real-time acquisition, transmission and 3D display of dynamic scenes, and developed a distributed, scalable architecture to manage the high computation and bandwidth demands. The 3D display shows high-resolution stereoscopic color images for multiple viewpoints without special glasses. This is the first real-time end-to-end 3D TV system with enough views and resolution to provide a truly immersive 3D experience. Japan plans to make this futuristic television a commercial reality by 2020 as part of a broad national project that will bring together researchers from the government, technology companies and academia. The targeted “virtual reality” television would allow people to view high-definition images in 3D from any angle, in addition to being able to touch and smell the objects being projected upwards from a screen to the floor.

Electrodynamic Tether

Added on: February 28th, 2012

An electrodynamic (ED) tether is a long conducting wire extended from a spacecraft. It has strong potential for providing propellantless propulsion to spacecraft in low Earth orbit. An electrodynamic tether uses the same principle as the electric motors in toys, appliances and computer disk drives: it works as a thruster because a magnetic field exerts a force on a current-carrying wire. The magnetic field is supplied by the Earth. When properly controlled, the forces generated by this “electrodynamic” tether can be used to pull or push a spacecraft, acting as a brake or a booster. NASA plans to lasso energy from Earth’s atmosphere with a tether as part of the first demonstration of a propellant-free space propulsion system, potentially leading to a revolutionary space transportation system. Working with Earth’s magnetic field would benefit a number of spacecraft, including the International Space Station. Tether propulsion requires no fuel, is completely reusable and environmentally clean, and provides all these features at low cost.
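
For a sense of scale, the thrust follows from the Lorentz force on a current-carrying wire, F = B·I·L. The tether length, current and field strength below are assumed example values, not figures from the report.

```python
# Thrust on an electrodynamic tether, F = B * I * L (field, current and length
# mutually perpendicular). All numbers below are illustrative assumptions.
B = 3.0e-5      # Earth's magnetic field in low Earth orbit, tesla (~0.3 gauss)
I = 1.0         # tether current, amperes
L = 5000.0      # tether length, metres (5 km)

F = B * I * L
print(f"Lorentz force on the tether: {F:.2f} N")   # ~0.15 N of continuous thrust
```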

HVAC

Added on: February 28th, 2012

Wireless transmission of electromagnetic radiation (communication signals) has become a popular method of transmitting RF signals such as cordless, wireless and cellular telephone signals, pager signals, two-way radio signals, video conferencing signals and LAN signals indoors.

Indoor wireless transmission has the advantage that the building in which transmission takes place does not have to be filled with wires or cables equipped to carry a multitude of signals. Wires and cables are costly to install and may require expensive upgrades when their capacity is exceeded or when new technologies require different types of wires and cables than those already installed.

Traditional indoor wireless communication systems transmit and receive signals through a network of transmitters, receivers and antennas placed throughout the interior of a building. Devices must be located so that signals are not lost and signal strength is not unduly attenuated. A change in the existing architecture also affects the wireless transmission. Another challenge related to the installation of wireless networks in buildings is the need to predict RF propagation and coverage in the presence of complex combinations of shapes and materials in the buildings.

In general, the attenuation in buildings is larger than that in free space, requiring more cells and higher power to obtain wider coverage. Despite all this, placement of transmitters, receivers and antennas in an indoor environment is largely a process of trial and error. Hence there is a need for a method and a system for efficiently transmitting RF and microwave signals indoors without having to install an extensive system of wires and cables inside the buildings.
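
As a rough reference point for this coverage problem, the free-space path loss formula gives the minimum possible attenuation between transmitter and receiver; real indoor losses are higher. The frequency and distances in the sketch are arbitrary examples.

```python
import math

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3.0e8
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Example: a 2.4 GHz WLAN signal at a few indoor distances (free space only;
# walls and floors add tens of dB on top of this).
for d in (1, 10, 30):
    print(f"{d:>3} m: {free_space_path_loss_db(d, 2.4e9):5.1f} dB")
```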

This paper suggests an alternative method of distributing electromagnetic signals in buildings by recognizing that every building is already equipped with an RF waveguide distribution system: the HVAC ducts. The use of HVAC ducts is also amenable to a systematic design procedure, and it should be significantly less expensive than other approaches since existing infrastructure is used and RF is distributed more efficiently.

Dense Wavelength Division Multiplexing

Added on: February 28th, 2012

There has always been technological innovation to meet the constant need to extend the capacity of the communication channel, and DWDM (Dense Wavelength Division Multiplexing) has dramatically brought about an explosive enlargement of the capacity of fiber networks, solving the problem of increasing traffic demand most economically.

DWDM is a technique that makes possible the transmission of multiple discrete wavelengths, each carrying data at rates as high as the fiber plant allows, over a single fiber, unidirectionally or bidirectionally.

It is an advanced form of WDM in which the optical channels are more closely spaced than in WDM.
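
To illustrate how closely the channels sit, the sketch below lists a few channels on the common 100 GHz ITU grid (anchored at 193.1 THz) and converts them to wavelengths near 1550 nm; the number of channels shown is arbitrary.

```python
C = 299_792_458.0            # speed of light, m/s

# DWDM channels on the standard 100 GHz ITU grid around 193.1 THz.
anchor_thz = 193.1
spacing_ghz = 100.0

for n in range(-2, 3):
    f_hz = (anchor_thz * 1e12) + n * spacing_ghz * 1e9
    wavelength_nm = C / f_hz * 1e9
    print(f"channel {n:+d}: {f_hz/1e12:.1f} THz  ->  {wavelength_nm:.3f} nm")
# Adjacent channels are only ~0.8 nm apart, hence "dense" WDM.
```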

Emission Control Techniques

Added on: February 28th, 2012

The need to control the emissions from automobiles gave rise to the computerization of the automobile. Hydrocarbons, carbon monoxide and oxides of nitrogen are created during the combustion process and are emitted into the atmosphere from the tail pipe. Hydrocarbons are also emitted as a result of vaporization of gasoline and from the crankcase of the automobile. The Clean Air Act of 1977 set limits on the amount of each of these pollutants that could be emitted from an automobile. The manufacturers’ answer was the addition of certain pollution control devices and the creation of a self-adjusting engine. 1981 saw the first of these self-adjusting engines; they were called feedback fuel control systems. An oxygen sensor installed in the exhaust system measures the fuel content of the exhaust stream and sends a signal to a microprocessor, which analyzes the reading and operates a fuel or air mixture device to create the proper air/fuel ratio. As computer systems progressed, they were able to adjust ignition spark timing as well as operate the other emission controls installed on the vehicle. The computer is also capable of monitoring and diagnosing itself: if a fault is detected, the computer alerts the vehicle operator by illuminating a malfunction indicator lamp, and at the same time records the fault in its memory so that a technician can later retrieve it in the form of a code that helps determine the proper repair. Some of the more common emission control devices installed on the automobile are the EGR valve, catalytic converter, air pump, PCV valve and charcoal canister.
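
A minimal sketch of that feedback fuel control loop: a crude narrow-band oxygen-sensor model nudges a fuel-trim value toward the stoichiometric air/fuel ratio. The gains, thresholds and toy plant model are assumptions for illustration only.

```python
STOICH_AFR = 14.7   # stoichiometric air/fuel ratio for gasoline

def o2_sensor(afr: float) -> float:
    """Crude narrow-band O2 sensor model: high voltage when rich, low when lean."""
    return 0.9 if afr < STOICH_AFR else 0.1

def update_fuel_trim(trim: float, sensor_v: float, step: float = 0.01) -> float:
    # Rich exhaust (high voltage) -> remove fuel; lean exhaust -> add fuel.
    return trim - step if sensor_v > 0.45 else trim + step

afr, trim = 13.0, 1.0            # start rich, with neutral trim
for _ in range(50):
    trim = update_fuel_trim(trim, o2_sensor(afr))
    afr = 13.0 / trim            # toy plant: less fuel -> leaner mixture
print(f"final trim {trim:.2f}, AFR {afr:.1f}")  # oscillates around ~14.7, as in a real loop
```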

Like SI engines, CI engines are also a major source of emissions. Several technologies have been developed, and many experiments are ongoing, to reduce emissions from CI engines. The main constituents of diesel emissions are smoke, soot, oxides of nitrogen, hydrocarbons and carbon monoxide. Unlike in SI engines, the carbon monoxide and hydrocarbon emissions of a CI engine are small. In order to give better engine performance the emissions must be reduced to a great extent. They can be reduced by using smoke-suppressant additives, particulate traps, SCR (Selective Catalytic Reduction) and similar measures.

3D Password

Added on: February 28th, 2012

Normally the authentication scheme the user undergoes is either very lenient or very strict. Over the years authentication has been a very interesting area. With all the means of technology developing, it can be very easy for ‘others’ to fabricate or steal an identity or to hack someone’s password. Therefore many algorithms have come up, each with an interesting approach toward calculation of a secret key. Such algorithms typically pick a random number in a range on the order of 10^6, so the probability of the same number recurring is small.

Users nowadays are provided with major password stereotypes such as textual passwords, biometric scanning, and tokens or cards (such as an ATM card). Mostly, textual passwords follow an encryption algorithm as mentioned above, biometric scanning is your “natural” signature, and cards or tokens prove your validity. But some people hate having to carry around their cards, and some refuse to undergo strong IR exposure to their retinas (biometric scanning). Textual passwords, nowadays, are usually kept very simple, say a word from the dictionary, a pet’s name, a girlfriend’s name and so on. Years back Klein performed such tests and he could crack 10-15 passwords per day. Now, with the change in technology, fast processors and the many tools on the Internet, this has become child’s play.

Therefore we present our idea, the 3D password, a more customizable and very interesting way of authentication. The password is now based on the workings of human memory, which in our scheme has to undergo recognition, recall, biometrics or token-based authentication. Generally, simple passwords are set so as to be quickly recalled. Once the scheme is implemented and you log in to a secure site, the 3D password GUI opens up. There is an additional textual password which the user can simply enter. Once he passes the first authentication, a 3D virtual room opens on the screen; in our case, let’s say a virtual garage. In a day-to-day garage one will find all sorts of tools and equipment, each having unique properties. The user then interacts with these properties accordingly. Each object in the 3D space can be moved around in the (x, y, z) space; that is the moving attribute of each object, and this property is common to all the objects in the space. Suppose a user logs in and enters the garage. He sees and picks a screwdriver (initial position in xyz coordinates (5, 5, 5)) and moves it 5 places to his right (in the XY plane, i.e. to (10, 5, 5)). That can be identified as an authentication step: only the true user understands and recognizes the object he has to choose among many. This is the recall and recognition part of human memory coming into play. Interestingly, a password can also be set as approaching a radio and setting its frequency to a number only the user knows. Security can be enhanced by including cards and biometric scanners as input, and there can be several levels of authentication a user can undergo.
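
In software terms, the 3D password can be thought of as an ordered sequence of interactions in the virtual scene, with authentication comparing the sequence performed at login against the enrolled one. The sketch below uses the garage example from the text; the hashing and data layout are illustrative assumptions, not the authors' implementation.

```python
import hashlib

# A 3D password modelled as an ordered list of (object, action, x, y, z) interactions.
enrolled = [
    ("screwdriver", "move", 10, 5, 5),      # picked at (5,5,5), moved 5 units along x
    ("radio", "set_frequency", 106, 0, 0),  # tuned to a frequency only the user knows
]

def digest(actions):
    """Hash the canonical action sequence so the raw password need not be stored."""
    canon = "|".join(",".join(map(str, a)) for a in actions)
    return hashlib.sha256(canon.encode()).hexdigest()

stored = digest(enrolled)

attempt = [
    ("screwdriver", "move", 10, 5, 5),
    ("radio", "set_frequency", 106, 0, 0),
]
print("authenticated" if digest(attempt) == stored else "rejected")
```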

Air Brake System

Added on: February 28th, 2012

The air brake system consists of the following components:

Compressor:
The compressor generates the compressed air for the whole system.

Reservoir:
The compressed air from the compressor is stored in the reservoir.

Unloader Valve:
This maintains the pressure in the reservoir at 8 bar. When the pressure goes above 8 bar, it immediately releases the pressurized air to bring the system back to 8 bar.

Air Dryer:
This removes the moisture from the atmospheric air and prevents corrosion of the reservoir.

System Protection Valve:
This valve takes care of the whole system. Air from the compressor is routed to the various circuits only through this valve. It operates only above 4 bar; once the system pressure goes below 4 bar, the valve immediately becomes inactive and applies the parking brake to ensure safety.

Dual Brake Valve:
When the driver applies the brakes, this valve releases air from one side to the other, depending upon the pedal force.

Graduated Hand Control Valve:
This valve takes care of the parking brakes.
Brake Chamber:
The air from the reservoir flows through various valves and finally reaches the brake chamber, which activates the S-cam in the brake shoe to apply the brakes at the front.

Actuators:
The air from the reservoir flows through various valves and finally reaches the brake chamber, which activates the S-cam in the brake shoe to apply the brakes in the rear.
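
The two pressure thresholds mentioned above (8 bar for the unloader valve, 4 bar for the system protection valve) can be summarised in a toy state function; the behaviour below is a simplification for illustration, not a model of any specific valve.

```python
UNLOADER_CUTOFF_BAR = 8.0      # unloader valve vents above this reservoir pressure
PROTECTION_MIN_BAR = 4.0       # below this the protection valve applies the parking brake

def brake_system_state(reservoir_bar: float) -> str:
    if reservoir_bar > UNLOADER_CUTOFF_BAR:
        return "unloader venting: pressure held at 8 bar"
    if reservoir_bar < PROTECTION_MIN_BAR:
        return "protection valve inactive: parking brake applied"
    return "normal service braking available"

for p in (9.0, 6.5, 3.0):
    print(f"{p} bar -> {brake_system_state(p)}")
```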

Application of Shunt Power Filter

Added on: February 28th, 2012

In this paper, the implementation of a shunt active power filter with a small series reactor for a three-phase system is presented. The system consists of multiple non-linear loads, which are a combination of harmonic current sources and harmonic voltage sources, with significant unbalanced components. The filter consists of a three-phase current-controlled voltage source inverter (CC-VSI) with a filter inductance at the ac output and a dc-bus capacitor. The CC-VSI is operated to directly control the ac grid current to be sinusoidal and in phase with the grid voltage. The switching is controlled using ramp time current control, which is based on the concept of zero average current error. The simulation results indicate that the filter along with the series reactor is able to handle predominantly the harmonic voltage sources, as well as the unbalance, so that the grid currents are sinusoidal, in phase with the grid voltages and symmetrical.
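
A small numerical sketch of the control objective: the shunt filter injects the difference between the distorted load current and a sinusoidal reference in phase with the grid voltage, so the grid supplies only the fundamental. The waveform amplitudes and harmonic content are invented, and the CC-VSI and ramp-time current control themselves are not modelled.

```python
import numpy as np

t = np.linspace(0, 0.02, 400, endpoint=False)   # one 50 Hz cycle
w = 2 * np.pi * 50

# Distorted, non-linear load current: fundamental plus 5th and 7th harmonics.
load_current = 10 * np.sin(w * t) + 3 * np.sin(5 * w * t) + 2 * np.sin(7 * w * t)

# Desired grid current: sinusoidal, in phase with the grid voltage, carrying
# only the load's fundamental component.
i_ref = 10 * np.sin(w * t)

# The shunt filter injects the difference, so the grid supplies only i_ref.
i_filter = load_current - i_ref

print("RMS of load current  :", round(float(np.sqrt(np.mean(load_current**2))), 2))
print("RMS of filter current:", round(float(np.sqrt(np.mean(i_filter**2))), 2))
print("RMS of grid current  :", round(float(np.sqrt(np.mean((load_current - i_filter)**2))), 2))
```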

Hyper Transport Technology

Added on: February 28th, 2012

Hyper Transport technology is a very fast, low latency, point-to-point link used for inter-connecting integrated circuits on board. Hyper Transport, previously codenamed as Lightning Data Transport (LDT), provides the bandwidth and flexibility critical for today’s networking and computing platforms while retaining the fundamental programming model of PCI. Hyper Transport was invented by AMD and perfected with the help of several partners throughout the industry.

Hyper Transport was designed to support both CPU-to-CPU communications and CPU-to-I/O transfers, and thus features very low latency. It provides up to 22.4 gigabytes per second of aggregate CPU-to-I/O or CPU-to-CPU bandwidth in a highly efficient chip-to-chip technology that replaces existing complex multi-level buses. Enhanced 1.2 volt LVDS signaling reduces signal noise, non-multiplexed lines cut down on signal activity, and dual-data-rate clocks lower clock rates while increasing data throughput. It employs a packet-based data protocol to eliminate many sideband (control and command) signals and supports asymmetric, variable-width data paths.
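
The 22.4 GB/s aggregate figure can be reconstructed from plausible link parameters of that generation (assumed here: 1.4 GHz link clock, double data rate, 32-bit links in each direction):

```python
clock_hz = 1.4e9          # link clock (assumed for this generation)
ddr_factor = 2            # data transferred on both clock edges
width_bytes = 32 / 8      # 32-bit link width per direction
directions = 2            # full-duplex: separate upstream and downstream links

aggregate_gb_s = clock_hz * ddr_factor * width_bytes * directions / 1e9
print(f"aggregate bandwidth: {aggregate_gb_s:.1f} GB/s")   # -> 22.4 GB/s
```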

New specifications are backward compatible with previous generations of specification, extending the investment made in one generation of Hyper Transport-enabled device to future generations. Hyper Transport devices are PCI software compatible, thus they require little or no software overhead. The technology targets networking, telecommunications, computers and embedded systems and any application where high speed, low latency and scalability are necessary.

Cylinder Deactivation

Added on: February 28th, 2012

With alternatives to the petrol engine being announced ever so often, you could be forgiven for thinking that the old favorite, the petrol engine, is on its last legs, but nothing could be further from the truth, and the possibilities for developing the petrol engine are endless. One of the most crucial jobs on the agenda is to find ways of reducing fuel consumption, cutting emissions of the greenhouse gas CO2 and also the toxic emissions which threaten air quality. One such fast-emerging technology is cylinder deactivation, where a number of cylinders are shut down when less power is needed, to save fuel.

The simple fact is that when you only need small amounts of power, such as when crawling around town, what you really need is a smaller engine. To put it another way, an engine performs most efficiently when it is working harder, so ask it to do the work of an engine half its size and efficiency suffers. Pumping, or throttling, losses are mostly to blame. Cylinder deactivation is one of the technologies that improve fuel economy; its objective is to reduce engine pumping losses under certain vehicle operating conditions.

When a petrol engine is working with the throttle wide open, pumping losses are minimal. But at part throttle the engine wastes energy trying to breathe through a restricted airway, and the bigger the engine, the bigger the problem. Deactivating half the cylinders at part load is much like temporarily fitting a smaller engine.

During World War II, enterprising car owners disconnected a spark plug wire or two in hopes of stretching their precious gasoline ration. Unfortunately, it didn’t improve gas mileage. Nevertheless, Cadillac resurrected the concept out of desperation during the second energy crisis. The “modulated displacement” 6.0L V8-6-4 introduced in 1981 disabled two, then four cylinders during part-throttle operation to improve the gas mileage of every model in Cadillac’s lineup. A digital dash display reported not only range, average mpg, and instantaneous mpg, but also how many cylinders were operating. Customers enjoyed the mileage boost but not the side effects. Many of them ordered dealers to cure their Cadillacs of the shakes and stumbles even if that meant disconnecting the modulated-displacement system.

Like wide ties, short skirts and $2-per-gallon gas, snoozing cylinders are back. General Motors, the first to show renewed interest in the idea, calls it Displacement on Demand (DoD). DaimlerChrysler, the first manufacturer to hit the U.S. market with a modern cylinder shut-down system, calls its approach the Multi-Displacement System (MDS). And Honda, who beat everyone to the punch by equipping Japanese-market Inspire models with cylinder deactivation last year, calls the approach Variable Cylinder Management (VCM).

The motivation is the same as before — improved gas mileage. Disabling cylinders finally makes sense because of the strides achieved in electronic power train controls. According to GM, computing power has been increased 50-fold in the past two decades and the memory available for control algorithms is 100 times greater. This time around, manufacturers expect to disable unnecessary cylinders so seamlessly that the driver never knows what’s happening under the hood.

Camless Engine

Added on: February 28th, 2012

The cam has been an integral part of the IC engine since its invention. The cam controls the “breathing channels” of the IC engine, that is, the valves through which the fuel-air mixture (in SI engines) or air (in CI engines) is supplied and exhaust driven out. Besieged by demands for better fuel economy, more power, and less pollution, motor engineers around the world are pursuing a radical “camless” design that promises to deliver the internal-combustion engine’s biggest efficiency improvement in years. The aim of all this effort is liberation from a constraint that has handcuffed performance since the birth of the internal-combustion engine more than a century ago. Camless engine technology is soon to be a reality for commercial vehicles. In the camless valve train, the valve motion is controlled directly by a valve actuator; there is no camshaft or connecting mechanism. A precise electrohydraulic camless valve train controls the valve operations: opening, closing and so on.

The seminar looks at the working of the electrohydraulic camless engine, its general features and benefits over conventional engines. The engines powering today’s vehicles, whether they burn gasoline or diesel fuel, rely on a system of valves to admit fuel and air to the cylinders and let exhaust gases escape after combustion. Rotating steel camshafts with precision-machined egg-shaped lobes, or cams, are the hard-tooled “brains” of the system. They push open the valves at the proper time and guide their closure, typically through an arrangement of pushrods, rocker arms, and other hardware. Stiff springs return the valves to their closed position. In an overhead-camshaft engine, a chain or belt driven by the crankshaft turns one or two camshafts located atop the cylinder head.
A single overhead camshaft (SOHC) design uses one camshaft to move rockers that open both inlet and exhaust valves. The double overhead camshaft (DOHC), or twin-cam, setup does away with the rockers and devotes one camshaft to the inlet valves and the other to the exhaust valves.

Darknet

Added on: February 28th, 2012

This paper outlines a migration path towards universal broadband connectivity, motivated by the design of a wireless store-and-forward communications network.

We argue that the cost of real-time, circuit-switched communications is sufficiently high that it may not be the appropriate starting point for rural connectivity. Based on market data for information and communication technology (ICT) services in rural India, we propose a combination of wireless technology with an asynchronous mode of communications to offer a means of introducing ICTs with:

  • affordability and practicality for end users
  • a sustainable cost structure for operators and investors
  • a smooth migration path to universal broadband connectivity

A summary of results and data is given for an operational pilot test of this wireless network in Karnataka, India, beginning in March 2003.
We also briefly discuss the economics and policy considerations for deploying this type of network in the context of rural connectivity.
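
A minimal sketch of the asynchronous, store-and-forward idea: messages are queued at a kiosk and handed over in bulk whenever a mobile access point (for example, one mounted on a bus) comes into range. The class names and flow below are illustrative, not the pilot system's actual software.

```python
from collections import deque

class StoreAndForwardNode:
    """A village kiosk that queues messages until a carrier passes by."""
    def __init__(self, name):
        self.name = name
        self.outbox = deque()

    def compose(self, msg):
        self.outbox.append(msg)              # stored locally; no live link needed

    def carrier_in_range(self, carrier):
        while self.outbox:                   # bulk transfer during the brief contact
            carrier.append((self.name, self.outbox.popleft()))

kiosk = StoreAndForwardNode("village-kiosk")
kiosk.compose("email: crop price query")
kiosk.compose("form: land record request")

bus_mounted_ap = []                          # the mobile access point's storage
kiosk.carrier_in_range(bus_mounted_ap)       # forwarded onward when the bus reaches town
print(bus_mounted_ap)
```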

Adaptive Cruise Control

Added on: February 28th, 2012

Mentally, driving is a highly demanding activity – a driver must maintain a high level of concentration for long periods and be ready to react within a split second to changing situations. In particular, drivers must constantly assess the distance and relative speed of vehicles in front and adjust their own speed accordingly.
Those tasks can now be performed by an Adaptive Cruise Control (ACC) system, which is an extension of the conventional cruise control system.

Like a conventional cruise control system, ACC keeps the vehicle at a set constant speed. The significant difference, however, is that if a car with ACC is confronted with a slower moving vehicle ahead, it is automatically slowed down and then follows the slower vehicle at a set distance. Once the road ahead is clear again, the ACC accelerates the car back to the previous set cruising speed. In that way, ACC integrates a vehicle harmoniously into the traffic flow.
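
A hedged sketch of the decision logic: hold the set speed unless the time gap to the vehicle ahead falls below a chosen headway, in which case follow at the lead vehicle's speed. The headway and example numbers are assumptions; production ACC uses radar tracking and far more elaborate control.

```python
def acc_command(own_speed, set_speed, gap_m, lead_speed, headway_s=1.8):
    """Return the target speed for the next control step (all speeds in m/s)."""
    desired_gap = headway_s * own_speed
    if gap_m < desired_gap:
        # Too close: follow the slower vehicle at the set time gap.
        return min(lead_speed, set_speed)
    # Road ahead clear enough: resume the driver's set cruising speed.
    return set_speed

print(acc_command(own_speed=30.0, set_speed=33.0, gap_m=40.0, lead_speed=25.0))  # -> 25.0
print(acc_command(own_speed=30.0, set_speed=33.0, gap_m=90.0, lead_speed=25.0))  # -> 33.0
```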

Common Synthetic Plastics

Added on: February 28th, 2012

Plastic molecules are made of long chains of repeating units called monomers. The atoms that make up a plastic’s monomers and the arrangement of the monomers within the molecule both determine many of the plastic’s properties. Plastics are one classification of polymers: if a polymer is shaped into hard and tough utility articles by the application of heat and pressure, it is used as a “plastic”.

Synthetic polymers are often referred to as “plastics”, such as the well-known polyethylene and nylon. However, most of them can be classified in at least three main categories: thermoplastics, thermosets and elastomers.

Man-made polymers are used in a bewildering array of applications: food packaging, films, fibers, tubing, pipes, etc. The personal care industry also uses polymers to aid in texture of products, binding etc.

4G Wireless Technology

Added on: February 27th, 2012

Pick up any newspaper today and it is a safe bet that you will find an article somewhere relating to mobile communications. If it is not in the technology section it will almost certainly be in the business section and relate to the increasing share prices of operators or equipment manufacturers, or acquisitions and take-overs thereof. Such is the pervasiveness of mobile communications that it is affecting virtually everyone’s life and has become a major political topic and a significant contributor to national gross domestic product (GDP).

The major driver to change in the mobile area in the last ten years has been the massive enabling implications of digital technology, both in digital signal processing and in service provision. The equivalent driver now, and in the next five years, will be the all-pervasiveness of software in both networks and terminals. The digital revolution is well underway and we stand at the doorway to the software revolution. Accompanying these changes are societal developments involving the extensions in the use of mobiles. Starting out from speech-dominated services we are now experiencing massive growth in applications involving SMS (Short Message Service) together with the start of Internet applications using WAP (Wireless Application Protocol) and i-mode. The mobile phone has not only followed the watch, the calculator and the organiser as an essential personal accessory but has subsumed all of them. With the new Internet extensions it will also lead to a convergence of the PC, hi-fi and television and provide mobility to facilities previously only available on one network.

Gasoline Direct Injection

Added on: February 27th, 2012

In recent years, legislative and market requirements have driven the need to reduce fuel consumption while meeting increasingly stringent exhaust emissions. This trend has dictated increasing complexity in automotive engines and new approaches to engine design. A key research objective for the automotive engineering community has been the potential combination of gasoline-engine specific power with diesel-like engine efficiency in a cost-competitive, production-feasible power train. One promising engine development route for achieving these goals is the potential application of lean-burn direct injection (DI) for gasoline engines. In carburetors the fuel is sucked in due to the pressure difference caused by the incoming air, which affects the functioning of the carburetor when density changes in the air are appreciable. There was a brief period of electronically controlled carburetors, but they were abandoned due to their complex nature. In fuel injection, on the other hand, the fuel is injected into the air.

Fluid Amplifiers

Added on: February 27th, 2012

When one stream of fluid is permitted to impinge on another, the direction of flow changes and the tendency of the fluid to strike the wall also changes. This concept gives rise to a new engineering discipline known as ‘fluidics’. The term fluidics is a contraction of the words fluid and logic. Tremendous progress has been made in the last twenty years in the design and application of fluidic devices.

The current interest in fluidics for logic and control functions was launched by the U.S. Army’s Harry Diamond Laboratories, which invented the first fluid amplifier in March 1960. This work was later expanded through a series of research and development contracts, and the work reported in this section was sponsored by the U.S. Air Force. The environmental capability of fluidic devices permits direct measurement of required control parameters within the engine.

These devices are more economical, faster and smaller than hydraulic control elements employing moving parts such as valves. Fluidic devices have no moving parts, hence they are more reliable and have a long life. Fluidics now offers an alternative to some devices operated electronically; it can operate where electronic devices are unsatisfactory, such as at high temperature or humidity, in the presence of severe vibrations, in high fire-risk environments, or where ionizing radiation is present.
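
To make the "fluid logic" idea concrete, here is a toy abstraction of a bistable wall-attachment amplifier (a fluidic flip-flop): the power jet clings to one wall until a control puff flips it, giving a memory element with no moving parts. This is a logical abstraction only, not a fluid-dynamics model.

```python
class FluidicFlipFlop:
    """Bistable wall-attachment amplifier: the output is which wall the jet clings to."""
    def __init__(self):
        self.state = 0                 # 0 or 1, i.e. which output port carries the flow

    def control_pulse(self, port: int):
        # A brief puff at a control port detaches the jet and flips it to the other wall.
        self.state = port

    def output(self) -> int:
        return self.state

ff = FluidicFlipFlop()
ff.control_pulse(1)
print(ff.output())   # -> 1, and it stays 1 with no further control flow (memory)
```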

Electro Chemical Machining

Added on: February 26th, 2012

Electro chemical machining (ECM) is the controlled removal of metal by anodic dissolution in an electrolytic medium in which the work piece is the anode & the tool is the cathode.
Working: Two electrodes are placed at a distance of about 0.5 mm and immersed in an electrolyte, which is a solution of sodium chloride. When an electrical potential of about 20 V is applied between the electrodes, the ions existing in the electrolyte migrate toward the electrodes.

Positively charged ions are attracted towards the cathode and negatively charged ions towards the anode. This initiates the flow of current in the electrolyte. The electrolysis process that takes place at the cathode liberates hydroxyl ions and free hydrogen. The hydroxyl ions combine with the metal ions of the anode to form insoluble metal hydroxides, and material is thus removed from the anode. This process continues and the tool reproduces its shape in the work piece (anode). The high current densities promote rapid generation of metal hydroxides and gas bubbles in the small spacing between the electrodes; after a few seconds these become a barrier to the electrolyzing current. To maintain a continuously high current density, these products have to be removed, which is achieved by circulating the electrolyte at high velocity through the gap between the electrodes. It is also to be noted that the machining gap increases as material is removed, so to maintain a constant gap the cathode should be advanced towards the anode at the same rate at which material is removed.
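
The removal rate in ECM is governed by Faraday's law of electrolysis. The sketch below estimates it for iron under assumed machining conditions (the 500 A current is an example value, not one from the text):

```python
# Material removal from Faraday's law: mass = I * t * M / (n * F)
F = 96485.0          # Faraday constant, C/mol

# Assumed example: machining iron at 500 A
I = 500.0            # current, A
M = 55.85            # atomic mass of iron, g/mol
n = 2                # valency of dissolution (Fe -> Fe2+)
rho = 7.86           # density of iron, g/cm^3

mass_per_min = I * 60 * M / (n * F)          # grams removed per minute
volume_per_min = mass_per_min / rho          # cm^3 per minute
print(f"removal rate: {mass_per_min:.1f} g/min  (~{volume_per_min:.2f} cm^3/min)")
```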

Chameleon Chips

Added on: February 26th, 2012

Today’s microprocessors sport a general-purpose design which has its own advantages and disadvantages.

  • Adv: One chip can run a range of programs. That’s why you don’t need separate computers for different jobs, such as crunching spreadsheets or editing digital photos
  • Disadv: For any one application, much of the chip’s circuitry isn’t needed, and the presence of those “wasted” circuits slows things down.

Suppose, instead, that the chip’s circuits could be tailored specifically for the problem at hand–say, computer-aided design–and then rewired, on the fly, when you loaded a tax-preparation program. One set of chips, little bigger than a credit card, could do almost anything, even changing into a wireless phone. The market for such versatile marvels would be huge, and would translate into lower costs for users.

So computer scientists are hatching a novel concept that could increase number-crunching power–and trim costs as well. Call it the chameleon chip.

Chameleon chips would be an extension of what can already be done with field-programmable gate arrays (FPGAS).

An FPGA is covered with a grid of wires. At each crossover, there’s a switch that can be semipermanently opened or closed by sending it a special signal. Usually the chip must first be inserted in a little box that sends the programming signals. But now, labs in Europe, Japan, and the U.S. are developing techniques to rewire FPGA-like chips anytime–and even software that can map out circuitry that’s optimized for specific problems.
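
In software terms, the building block being configured is essentially a small lookup table: the configuration bits define its truth table, so "rewiring" means loading new bits. The 4-input LUT and configuration words below are illustrative.

```python
class LUT4:
    """A 4-input lookup table: 16 configuration bits define any 4-input logic function."""
    def __init__(self, config_bits: int):
        self.config = config_bits              # the 'bitstream' for this cell

    def reconfigure(self, config_bits: int):
        self.config = config_bits              # rewiring = loading new bits

    def evaluate(self, a, b, c, d) -> int:
        index = (a << 3) | (b << 2) | (c << 1) | d
        return (self.config >> index) & 1

lut = LUT4(0x8000)                 # configured as a 4-input AND (only index 15 is 1)
print(lut.evaluate(1, 1, 1, 1))    # -> 1
lut.reconfigure(0xFFFE)            # reconfigured on the fly as a 4-input OR
print(lut.evaluate(0, 0, 0, 0))    # -> 0
print(lut.evaluate(0, 1, 0, 0))    # -> 1
```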

The chips still won’t change colors, but they may well color the way we use computers in years to come. A chameleon chip is a fusion between custom integrated circuits and programmable logic. For highly performance-oriented tasks, custom chips that do one or two things spectacularly, rather than a lot of things averagely, are used. With field-programmable chips we now have chips that can be rewired in an instant, so the benefits of customization can be brought to the mass market.

A reconfigurable processor is a microprocessor with erasable hardware that can rewire itself dynamically. This allows the chip to adapt effectively to the programming tasks demanded by the particular software they are interfacing with at any given time. Ideally, the reconfigurable processor can transform itself from a video chip to a central processing unit (cpu) to a graphics chip, for example, all optimized to allow applications to run at the highest possible speed. The new chips can be called a “chip on demand.” In practical terms, this ability can translate to immense flexibility in terms of device functions. For example, a single device could serve as both a camera and a tape recorder (among numerous other possibilities): you would simply download the desired software and the processor would reconfigure itself to optimize performance for that function.

Reconfigurable processors compete in the market with traditional hard-wired chips and several types of programmable microprocessors. Programmable chips have been in existence for over ten years. Digital signal processors (DSPs), for example, are high-performance programmable chips used in cell phones, automobiles, and various types of music players.

Another variant, the programmable logic chip, is equipped with arrays of memory cells that can be programmed to perform hardware functions using software tools. These are more flexible than the specialized DSP chips but also slower and more expensive. Hard-wired chips are the oldest, cheapest, and fastest, but also the least flexible, of all the options.

Space Robotics

Added on: February 26th, 2012

A robot is a system with a mechanical body that uses a computer as its brain. By integrating the sensors and actuators built into the mechanical body, the motions are realised with computer software to execute the desired task. Robots are more flexible in terms of their ability to perform new tasks or to carry out complex sequences of motion than other categories of automated manufacturing equipment. Today there is a lot of interest in this field, and a separate branch of technology, ‘robotics’, has emerged. It is concerned with all problems of robot design, development and application. The technology to substitute or subsidise manned activities in space is called space robotics. Applications of space robots include the inspection of a defective satellite, its repair, the construction of a space station, supplying goods to the station, retrieval, and so on. With the overlap of knowledge of kinematics, dynamics and control, and progress in fundamental technologies, it is about to become possible to design and develop such advanced robotic systems. This will throw open the doors to explore and experience the universe and bring countless changes for the better in the ways we live.

Optical Computers

Added on: February 26th, 2012

Computers have enhanced human life to a great extent. The goal of improving computer speed has resulted in the development of Very Large Scale Integration (VLSI) technology, with smaller device dimensions and greater complexity.

VLSI technology has revolutionized the electronics industry; additionally, our daily lives demand solutions to increasingly sophisticated and complex problems, which require more speed and better performance from computers.

For these reasons, it is unfortunate that VLSI technology is approaching its fundamental limits in the sub-micron miniaturization process. It is now possible to fit up to 300 million transistors on a single silicon chip. As per Moore’s law, it is also estimated that the number of transistor switches that can be put onto a chip doubles every 18 months. Further miniaturization of lithography introduces several problems such as dielectric breakdown, hot carriers, and short channel effects. All of these factors combine to seriously degrade device reliability. Even if developing technology succeeded in temporarily overcoming these physical problems, we will continue to face them as long as increasing demands for higher integration continue. Therefore, a dramatic solution to the problem is needed, and unless we gear our thoughts toward a totally different pathway, we will not be able to further improve our computer performance for the future.

Optical interconnections and optical integrated circuits will provide a way out of these limitations to computational speed and complexity inherent in conventional electronics. Optical computers will use photons traveling on optical fibers or thin films, instead of electrons, to perform the appropriate functions. In the optical computer of the future, electronic circuits and wires will be replaced by a few optical fibers and films, making the systems more efficient with no interference, more cost effective, lighter and more compact. Optical components would not need the insulators required between electronic components because they do not experience crosstalk. Indeed, multiple frequencies (or different colors) of light can travel through optical components without interfering with each other, allowing photonic devices to process multiple streams of data simultaneously.

Biochips

Added on: February 26th, 2012

Biochips were invented 9 years ago by gene scientist Stephen Fodor. In a flash of light he saw that photolithography, the process used to etch semiconductor circuits into silicon, could also be used to assemble particular DNA molecules on a chip.

The human body is the next big target of chip makers. Medical researchers have been working for a long time to integrate the human body and chips. Within a short period of time biochips could be implanted into human bodies, and the integration of humans and chips would be achieved this way.

Money and research have already gone into this area of technology. Such implants are already being experimented with in animals: a simple chip is being implanted into tens of thousands of animals, especially pets.

Expert Systems

Added on: February 26th, 2012

An intelligent fault diagnosis and operator support system targeting the safer operation of generators and distribution substations in power plants is introduced in this paper. Based on Expert Systems (ES) technology, it incorporates a number of rules for the real-time state estimation of the generator electrical part and the distribution substation topology. Within every sampling cycle the estimated state is compared to an a priori state formed by measurements and digital signaling coming from current and voltage transformers as well as the existing electronic protection equipment. Whenever a conflict between the estimated and measured state arises, a set of heuristic rules is activated for fault scenario inference and reporting. An included SCADA helps operators in the fast processing of large amounts of data, thanks to the user-friendly graphical representation of the monitored system. Enhanced with many heuristic rules, and being a knowledge-based system, the proposed system goes beyond imitation of expert operators’ knowledge, being able to infer fault scenarios concerning even components like the power electronic circuits of the generator excitation system. For example, abnormal measurements on the generator’s terminals can activate rules that generate a fault hypothesis possibly related to abnormal switching of the excitation thyristors.
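
A minimal sketch of the rule-based idea: each sampling cycle the estimated state is compared with the measured one, and on a conflict heuristic rules fire and propose fault scenarios. The rules below are invented examples in the spirit of the paper, not its actual rule base.

```python
# Illustrative heuristic rules: (condition on the conflict, fault hypothesis reported)
RULES = [
    (lambda est, meas: meas["terminal_voltage"] < 0.9 * est["terminal_voltage"],
     "possible abnormal switching of the excitation thyristors"),
    (lambda est, meas: meas["breaker_closed"] != est["breaker_closed"],
     "substation topology conflict: breaker state mismatch"),
]

def diagnose(estimated, measured):
    """Fire every rule whose condition holds and collect the fault hypotheses."""
    return [hypothesis for cond, hypothesis in RULES if cond(estimated, measured)]

estimated = {"terminal_voltage": 1.00, "breaker_closed": True}
measured  = {"terminal_voltage": 0.82, "breaker_closed": True}
print(diagnose(estimated, measured))
```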

Interstate Hydrogen Highway

Added on: February 26th, 2012

The Interstate Hydrogen Highway concept was proposed by Justin Eric Sutton. This highway mainly depends on hydrogen and water. Hydrogen is obtained in a basic process in which electricity produced when sunlight strikes EPV (electro-photovoltaic) panels is used to convert distilled water into hydrogen and oxygen. While the oxygen could be bottled and sold cheaply, the hydrogen would serve as a “battery”, stored in compressed form in cooling tanks adjacent to the traveler system in utility centers. Electricity is produced from the hydrogen using hydrogen fuel cell technology. Electricity generated in the hydrogen highway, which uses Magnetic Levitation (MAGLEV) technology, would be used to provide for other power needs such as utility stations, access stations, lighting and maintenance, and the rest could be used for domestic supply.

A certain amount of hydrogen would be stored each day to cover night-time travel and weather-related burdens. The speed of the trailblazer on the hydrogen highway would be 250-300 mph. All it takes is $1,50,00,000 per mile and $2,50,000 per rail car. An eventual system size of nearly 54,000 miles would yield as much as 45 billion watts of continuous electrical power.

Quantum Mirage

Added on: February 26th, 2012

Since it first appeared on the cover of Nature in February 2000, the “quantum mirage” has featured on posters, calendars, websites and the covers of various books and magazines. The image, which was obtained using a scanning tunnelling microscope, shows the electronic wavefunctions inside an elliptical “quantum corral” made of cobalt atoms on a copper surface. It was created by Hari Manoharan, Christopher Lutz and Don Eigler of the IBM Almaden Research Center in California. In 1990, working with Erhard Schweizer, Eigler spelt out the letters “IBM” using 35 xenon atoms. And three years later, working with Lutz and Michael Crommie, he released the first images of the “quantum corral”, which have also been reproduced in numerous places.

The quantum mirage uses the wave nature of electrons to move the information, instead of a wire, so it has the potential to enable data transfer within future nano-scale electronic circuits so small that conventional wires do not work. It will be years before this technology becomes practical, but it could eventually yield computers that are many orders of magnitude smaller, faster, and less power-hungry than anything we can conceive today.

Rapid Prototyping

Added on: February 26th, 2012

The term rapid prototyping (RP) refers to a class of technologies that can automatically construct physical models from Computer-Aided Design (CAD) data. It is also called Desktop Manufacturing or Freeform Fabrication. These technologies enable us to make even complex prototypes that act as an excellent visual aid to communicate with co-workers and customers. These prototypes are also used for design testing.

Why Rapid Prototyping?

  • Objects can be formed with any geometric complexity or intricacy without the need for elaborate machine set-up or final assembly.
  • Freeform fabrication systems reduce the construction of complex objects to a manageable, straightforward, and relatively fast process.
  • These techniques are currently being advanced further to such an extent that they can be used for low-volume, economical production of parts.
  • It significantly cuts costs as well as development times.

Air Car

Added on: February 26th, 2012

The Air car is a car currently being developed, and eventually to be manufactured, by Moteur Developpement International (MDI), founded by the French inventor Guy Nègre. It will be sold by this company as well as by ZevCat, a US company based in California.

The air car is powered by an air engine, specifically tailored for the car. The used air engine is being manufactured by CQFD Air solution, a company closely linked to MDI.

The engine is powered by compressed air, stored in a glass-fibre or carbon-fibre tank at 4500 psi. The engine has injection similar to normal engines, but uses special crankshafts and pistons, which remain at top dead center for about 70% of the engine’s cycle; this allows more power to be developed in the engine.

Though some consider the car to be pollution-free, it must be taken into account that the tanks are recharged using electric (or gasoline) compressors, resulting in some pollution if the electricity used to operate the compressors comes from polluting power plants (such as gas- or coal-fired plants). Solar power could possibly be used to power the compressors at fuel stations.
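
For a rough sense of how much energy such a tank holds, the isothermal expansion work gives an upper-bound estimate, W = P·V·ln(P/P0). Only the 4500 psi figure comes from the text; the 300-litre tank volume is an assumption for illustration.

```python
import math

PSI_TO_PA = 6894.76
P_tank = 4500 * PSI_TO_PA        # storage pressure from the text (~31 MPa)
P_atm = 101_325.0                # ambient pressure, Pa
V_tank = 0.3                     # assumed tank volume, m^3 (300 litres)

# Maximum (isothermal) work recoverable when the air expands back to ambient pressure.
W = P_tank * V_tank * math.log(P_tank / P_atm)
print(f"stored energy: {W/1e6:.0f} MJ  (~{W/3.6e6:.1f} kWh)")
```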

Crusoe Processor

Added on: February 26th, 2012

Mobile computing has been the buzzword for quite a long time. Mobile computing devices like laptops, notebook PCs etc are becoming common nowadays. The heart of every PC whether a desktop or mobile PC is the microprocessor. Several microprocessors are available in the market for desktop PCs from companies like Intel, AMD, Cyrix etc. The mobile computing market has never had a microprocessor specifically designed for it. The microprocessors used in mobile PCs are optimized versions of the desktop PC microprocessor.

Mobile computing makes very different demands on processors than desktop computing. Those desktop PC processors consume lots of power, and they get very hot. When you’re on the go, a power-hungry processor means you have to pay a price: run out of power before you’ve finished, or run through the airport with pounds of extra batteries. A hot processor also needs fans to cool it, making the resulting mobile computer bigger, clunkier and noisier. The market will still reject a newly designed microprocessor with low power consumption if the performance is poor. So any attempt in this regard must have a proper ‘performance-power’ balance to ensure commercial success. A newly designed microprocessor must be fully x86 compatible that is they should run x86 applications just like conventional x86 microprocessors since most of the presently available software has been designed to work on x86 platform.

Crusoe is a new microprocessor designed specifically for the mobile computing market, with the above-mentioned constraints in mind. A small Silicon Valley startup company called Transmeta Corp. developed this microprocessor.

The concept of Crusoe is well understood from a simple sketch of the processor architecture, called the ‘amoeba’. In this concept, the x86 architecture is an ill-defined amoeba containing features like segmentation, ASCII arithmetic, variable-length instructions etc. Crusoe was thus conceptualized as a hybrid microprocessor: it has a software part and a hardware part, with the software layer surrounding the hardware unit. The role of the software is to act as an emulator, translating x86 binaries into native code at run time. Crusoe is a 128-bit microprocessor fabricated using the CMOS process. The chip’s design is based on a technique called VLIW to ensure design simplicity and high performance. The other two technologies used are Code Morphing software and LongRun power management. The Crusoe hardware can be changed radically without affecting legacy x86 software: for the initial Transmeta products, models TM3120 and TM5400, the hardware designers opted for minimal space and power.
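
A highly simplified sketch of the code-morphing idea: translate each block of guest (x86) instructions into native code the first time it runs, cache the translation, and reuse it afterwards. The "instructions" and translator below are stand-ins, not Transmeta's actual instruction sets or translator.

```python
translation_cache = {}

def translate(block):
    """Pretend translation of a guest (x86-like) block into 'native VLIW' operations."""
    return tuple(op.upper() for op in block)       # stand-in for real binary translation

def execute(block):
    key = tuple(block)
    if key not in translation_cache:               # slow path: translate once...
        translation_cache[key] = translate(block)
    native = translation_cache[key]                # ...fast path thereafter
    for op in native:
        pass                                       # stand-in for running native code
    return native

hot_loop = ["mov eax, 1", "add eax, ebx", "jnz loop"]
execute(hot_loop)
execute(hot_loop)                                  # second call reuses the cached translation
print(len(translation_cache), "block(s) translated")
```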

Mine Detection Using Radar Bullets

Added on: February 26th, 2012

Land mines are a serious threat to the lives of civilians, especially in mine-affected countries like Afghanistan and Iraq. Mines implanted during wartime may remain undetected for several decades and may suddenly be activated after that. There are several methods for the detection of land mines, such as metal detection and explosive detection. These methods are dangerous because they are carried out very close to the mine.

A safer method for detecting land mines is mine detection using radar bullets. As the name suggests, detection is done using radar bullets and hence can be carried out from farther away from the mine; detection is usually done from helicopters. Research is being conducted to overcome certain inefficiencies of this method.

iDEN

Added on: February 25th, 2012

iDEN is a mobile telecommunications technology, developed by Motorola, which provides its users the benefits of a trunked radio and a cellular telephone. iDEN places more users in a given spectral space, compared to analog cellular and two-way radio systems, by using speech compression and time division multiple access (TDMA). Notably, iDEN is designed, and licensed, to operate on individual frequencies that may not be contiguous. iDEN operates on 25 kHz channels, but only occupies 20 kHz in order to provide interference protection via guard bands. By comparison, TDMA cellular (IS-54 and IS-136) is licensed in blocks of 30 kHz channels, but each emission occupies 40 kHz, and is capable of serving the same number of subscribers per channel as iDEN. iDEN supports either three or six interconnect users (phone users) per channel, and either six or twelve dispatch users (push-to-talk users) per channel. Since there is no analogue component of iDEN, mechanical duplexing in the handset is unnecessary, so time-domain duplexing is used instead, the same way that other digital-only technologies duplex their handsets. Also, like other digital-only technologies, hybrid or cavity duplexing is used at the base station (cell site).

Microwave Superconductivity

Added on: February 25th, 2012

Superconductivity is a phenomenon occurring in certain materials, generally at very low temperatures, characterized by exactly zero electrical resistance and the exclusion of the interior magnetic field (the Meissner effect). It was discovered by Heike Kamerlingh Onnes in 1911. Applying the principle of superconductivity in the microwave and millimeter-wave (mm-wave) regions, components with superior performance can be fabricated. A major problem during the earlier days was that the cryogenic burden was perceived as too great compared to the performance advantage that could be realized. There were very specialized applications, such as low-noise microwave and mm-wave mixers and detectors for the highly demanding radio astronomy applications, where the performance gained was worth the effort and complexity. With the discovery of high-temperature superconductors like copper oxides, rapid progress was made in the field of microwave superconductivity.

This topic describes the properties of superconductivity that can be exploited in microwave and mm-wave technologies to yield components with appreciable performance enhancement over conventional systems. Superconducting signal transmission lines can yield low-attenuation, zero-dispersion signal transmission for signals with frequency components less than about one tenth of the superconducting energy gap. No other known microwave device technology can provide similar behavior. Superconductors have also been used to make high-speed digital circuits, Josephson junctions, and RF and microwave filters for mobile phone base stations.

Beamed Power Transmission

Added on: February 24th, 2012

The desire of modern man for more and more amenities and sophistication has led to the unscrupulous exploitation of natural treasures. Though nature has provided an abundant source of resources, it is not unlimited; hence the exhaustion of natural resources is imminent. The only exception to this is sunlight.

Scientists who understood this naked truth thought of exploiting solar energy and started experimenting in this direction as early as 1970, but progress was very slow and much headway is yet to be made. However, as an important source of non-conventional energy, and because conventional energy sources are limited, emphasis has been given to better utilization of solar energy.

With the application of solar cells, photovoltaic cells and the like, we are able to convert only a small percentage of solar energy into electrical energy. But by using beamed power transmission from a solar power satellite we can envisage a higher percentage of conversion. By beamed power transmission, we can extend the present two-dimensional transmission network into three dimensions, and it does not pose any environmental problem as well.

Memristors

Added on: February 24th, 2012

Generally, when most people think about electronics, they may initially think of products such as cell phones, radios, laptop computers, etc. Others, having some engineering background, may think of resistors, capacitors, etc., which are the basic components necessary for electronics to function. Such basic components are fairly limited in number, each having its own characteristic function.

Memristor theory was formulated and named by Leon Chua in a 1971 paper. Chua strongly believed that a fourth device existed to provide conceptual symmetry with the resistor, inductor, and capacitor. This symmetry follows from the description of basic passive circuit elements as defined by a relation between two of the four fundamental circuit variables. A device linking charge and flux (themselves defined as time integrals of current and voltage), which would be the memristor, was still hypothetical at the time. However, it would not be until thirty-seven years later, on April 30, 2008, that a team at HP Labs led by the scientist R. Stanley Williams would announce the discovery of a switching memristor. Based on a thin film of titanium dioxide, it has been presented as an approximately ideal device.
The reason that the memristor is radically different from the other fundamental circuit elements is that, unlike them, it carries a memory of its past. When you turn off the voltage to the circuit, the memristor still remembers how much was applied before and for how long. That’s an effect that can’t be duplicated by any circuit combination of resistors, capacitors, and inductors, which is why the memristor qualifies as a fundamental circuit element.
The arrangement of these few fundamental circuit components forms the basis of almost all of the electronic devices we use in our everyday life. Thus the discovery of a brand new fundamental circuit element is something not to be taken lightly and has the potential to open the door to a brand new type of electronics. HP already has plans to implement memristors in a new type of non-volatile memory which could eventually replace flash and other memory systems.
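
A compact sketch of the linear ion-drift model often used to illustrate the HP device: the memristance depends on how much charge has passed, so the resistance set by an applied bias persists after the bias is removed. All parameter values below are illustrative assumptions, not HP's published figures.

```python
# Linear ion-drift memristor model (HP-style): M(w) = Ron*w/D + Roff*(1 - w/D),
# with dw/dt = mu*Ron/D * i.  All parameter values are illustrative assumptions.
Ron, Roff = 100.0, 16_000.0      # ohm
D, mu = 10e-9, 1e-14             # film thickness (m), ion mobility (m^2/Vs)

w = 0.1 * D                      # width of the doped (low-resistance) region
def memristance(width): return Ron * (width / D) + Roff * (1 - width / D)

dt = 1e-5
def apply_voltage(v, seconds):
    global w
    for _ in range(int(seconds / dt)):
        i = v / memristance(w)
        w = min(max(w + mu * Ron / D * i * dt, 0.0), D)   # state drifts with charge passed

print(f"initial: {memristance(w):.0f} ohm")
apply_voltage(+1.0, 0.2)          # positive bias drives the doped region wider
print(f"after +1 V for 0.2 s: {memristance(w):.0f} ohm")
apply_voltage(0.0, 1.0)           # no bias: the state (and hence resistance) is remembered
print(f"after 1 s with no bias: {memristance(w):.0f} ohm")
```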

Blue Gene

Added on: February 24th, 2012

Blue Gene/L (BG/L) is a 64K (65,536) node scientific and engineering supercomputer that IBM is developing with partial funding from the United States Department of Energy. This paper describes one of the primary BG/L interconnection networks, a three-dimensional torus. We describe a parallel performance simulator that was used extensively to help architect and design the torus network, and we present sample simulator performance studies that contributed to design decisions. In addition to such studies, the simulator was also used during the logic verification phase of BG/L for performance verification, and its use there uncovered a bug in the VHDL implementation of one of the arbiters. BG/L is a message-passing supercomputer funded in part by the U.S. Department of Energy Lawrence Livermore National Laboratory. A 64K node system is scheduled to be delivered to Livermore, while a 20K node system will be installed at the IBM T.J. Watson Research Center for use in life sciences computing, primarily protein folding. A more complete overview of BG/L may be found in [1], but we briefly describe the primary features of the machine.
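
To illustrate what a three-dimensional torus interconnect means in practice, the sketch below computes a node's six neighbours and the minimal hop distance between two nodes with wrap-around links; the 8x8x8 dimensions are an arbitrary example, not BG/L's actual partition shape.

```python
DIMS = (8, 8, 8)     # example torus dimensions (x, y, z)

def neighbors(node):
    """Each node has six neighbours: +/-1 in each dimension, with wrap-around."""
    x, y, z = node
    result = []
    for axis, size in enumerate(DIMS):
        for step in (-1, +1):
            coords = [x, y, z]
            coords[axis] = (coords[axis] + step) % size
            result.append(tuple(coords))
    return result

def torus_hops(a, b):
    """Minimal hop count: per dimension, go the short way around the ring."""
    return sum(min(abs(ai - bi), size - abs(ai - bi))
               for ai, bi, size in zip(a, b, DIMS))

print(neighbors((0, 0, 0)))              # wrap-around links appear as coordinate 7
print(torus_hops((0, 0, 0), (7, 4, 1)))  # -> 1 + 4 + 1 = 6 hops
```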

Image Authentication Techniques

Added on: February 24th, 2012

This paper explores the various techniques used to authenticate the visual data recorded by automatic video surveillance (VS) systems. Automatic video surveillance systems are used for continuous and effective monitoring and reliable control of remote and dangerous sites. Some practical issues must be taken into account in order to take full advantage of the potential of VS systems. The validity of visual data acquired, processed and possibly stored by the VS system, as proof in front of a court of law, is one such issue.

But visual data can be modified using sophisticated processing tools without leaving any visible trace of the modification. So digital image or video data have no value as legal proof, since doubt would always exist that they had been intentionally tampered with to incriminate or exculpate the defendant. Besides, video data can be created artificially by computerized techniques such as morphing. Therefore the true origin of the data must be indicated in order to use them as legal proof. By data authentication we mean here a procedure capable of ensuring that data have not been tampered with and of indicating their true origin.
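
One common building block for such authentication is a keyed digest computed when the data are acquired: if even one bit of the stored frame changes, verification fails. The sketch below uses Python's standard hmac module; the key handling is purely illustrative (in a real camera the secret would live in tamper-resistant hardware).

```python
import hmac, hashlib, os

camera_key = os.urandom(32)                      # illustrative: a per-camera secret key

def sign_frame(frame_bytes: bytes) -> bytes:
    """Keyed digest attached to each frame when it is acquired."""
    return hmac.new(camera_key, frame_bytes, hashlib.sha256).digest()

def verify_frame(frame_bytes: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign_frame(frame_bytes), tag)

frame = bytes(1000)                              # stand-in for raw frame data
tag = sign_frame(frame)

tampered = bytearray(frame)
tampered[500] ^= 0x01                            # flip a single bit with an editing tool
print(verify_frame(frame, tag))                  # -> True
print(verify_frame(bytes(tampered), tag))        # -> False: tampering is detected
```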

Hovercraft

Added on: February 24th, 2012

Hovercraft are vehicles designed to travel close to but above ground or water. These vehicles are supported in various ways. Some of them have a specially designed wing that will lift them just off the surface over which they travel when they have reached a sufficient horizontal speed (the ground effect). Hovercraft are usually supported by fans that force air down under the vehicle to create lift; air propellers, water propellers, or water jets usually provide forward propulsion. Air-cushion vehicles can attain higher speeds than either ships or most land vehicles and use much less power than helicopters of the same weight. Air-cushion suspension has also been applied to other forms of transportation, in particular trains, such as the French Aérotrain and the British hovertrain.

A hovercraft is a transportation vehicle that rides slightly above the earth’s surface. Air is continuously forced under the vehicle by a fan, generating the cushion that greatly reduces friction between the moving vehicle and the surface. The air is delivered through ducts and injected at the periphery of the vehicle in a downward and inward direction. This type of vehicle can ride equally well over ice, water, marsh, or relatively level land.
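
As a quick worked example of the cushion principle, the pressure the fan must maintain is simply the craft's weight divided by the cushion area; the mass and footprint below are assumed figures for a small recreational hovercraft.

```python
g = 9.81
mass_kg = 250.0          # assumed craft + pilot mass
cushion_area_m2 = 6.0    # assumed cushion footprint (~3 m x 2 m)

cushion_pressure = mass_kg * g / cushion_area_m2   # pascals above ambient
print(f"required cushion pressure: {cushion_pressure:.0f} Pa "
      f"({cushion_pressure/1000:.2f} kPa, a very gentle overpressure)")
```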
