
Flying Windmills

Added on: March 4th, 2012 by Afsal Meerankutty

High Altitude Wind Power uses flying electric generator (FEG) technology in the form of what have popularly been called flying windmills. It is a proposed renewable energy project over rural or low-populated areas that would produce around 12,000 MW of electricity from only 600 well-clustered rotorcraft kites, which use simple autogyro physics to generate far more energy than a nuclear plant can.

According to Sky WindPower, the overuse of fossil fuels and the growing stockpile of radioactive waste from nuclear plants are taking our planet once again down a path of destruction, for energy that is more expensive and far more dangerous in the long run. FEG technology is cheaper, cleaner and can provide more energy than those environmentally unhealthy methods of the past, making it a desirable alternative.

The secret to functioning High Altitude Wind Power is efficient tether technology reaching 15,000 feet into the air, far higher than birds fly, though it does create restricted airspace for planes and other aircraft.

The same materials used in the tethers that hold radar balloons in place, such as those deployed along the Mexican-American border, can also hold flying windmills in place, and energy cable technology keeps getting lighter and stronger. Flying windmills appear to be 90 percent more energy efficient in wind tunnel tests than their land-based counterparts: roughly three times the efficiency, thanks to the consistently strong winds available at 15,000 feet to clustered rotorcraft kites on tethers.

High Altitude Wind Power offers a cleaner and more powerful source of generation than anything on the grid at present, and if Sky WindPower Corp. has its way, FEG technology and flying windmills will lead the way to a more sustainable future within the decade.

Smart Note Taker

Added on: March 4th, 2012 by Afsal Meerankutty

The Smart NoteTaker is a helpful product that meets the needs of people in today's fast, technological life. It can be used in many ways. The Smart NoteTaker lets busy people take notes quickly and easily: with its help, they can write notes in the air while occupied with other work. The written note is stored on the pen's memory chip and can be read in digital form after the job is done. This saves time and makes life easier.

The Smart NoteTaker is also helpful for blind people, who can think and write freely with it. Another place where the product can play an important role is a phone conversation: the two subscribers are apart while they talk, and they may want to use figures or text to understand each other better. It is also especially useful for instructors giving presentations, who may not want to deliver the lecture standing at the board. A drawn figure can be processed and sent directly to the server computer in the room, which then broadcasts the drawn shape over the network to all of the computers present. In this way, lectures are intended to be more efficient and fun. The product is simple but powerful: it senses the 3D shapes and motions that the user tries to draw, and the sensed information is processed, transferred to the memory chip, and then shown on the display device. The drawn shape can then be broadcast to the network or sent to a mobile device.

An additional feature of the product will display previously taken notes in an application program on the computer. This application can be a word processor or an image editor. The figures drawn in the air are recognized and, with the help of the software we will write, the desired character is printed into the word document. If the application is a paint-related program, the most similar shape is chosen by the program and printed on the screen.

Since a Java applet is suitable for both drawings and strings, all of these applications can be brought together in a single Java program. The Java code we develop will also be installed on the pen, so that the processor inside the pen can type and draw the desired shape or text on the display panel.

Speech Recognition

Added on: March 4th, 2012 by Afsal Meerankutty

Language is human beings' most important means of communication, and speech is its primary medium. Speech research provides an international forum for communication among researchers in the disciplines that contribute to our understanding of its production, perception, processing, learning and use. Spoken interaction, both between human interlocutors and between humans and machines, is inescapably embedded in the laws and conditions of communication, which comprise the encoding and decoding of meaning as well as the mere transmission of messages over an acoustical channel. Here we deal with this interaction between man and machine through synthesis and recognition applications.
The paper dwells on speech technology and the conversion of speech into analog and digital waveforms that machines can understand.

Speech recognition, or speech-to-text, involves capturing and digitizing the sound waves, converting them to basic language units or phonemes, constructing words from phonemes, and contextually analyzing the words to ensure correct spelling for words that sound alike. Speech recognition is the ability of a computer to recognize general, naturally flowing utterances from a wide variety of users; in a telephony system, for example, it recognizes the caller's answers to move the call along.
We have emphasized the modeling of speech units and grammar on the basis of the Hidden Markov Model (HMM). Speech recognition allows you to provide input to an application with your voice, and the applications and limitations of the subject illustrate the impact of speech processing on our modern technical field.
While there is still much room for improvement, current speech recognition systems show remarkable performance. Rather than asking what is still deficient, we ask instead what should be done to make the technology efficient.
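To make the HMM idea concrete, here is a minimal Viterbi decoding sketch in Python; the phone set, transition probabilities and acoustic scores are invented toy numbers, not trained models.

```python
import numpy as np

states = ["sil", "k", "ae", "t"]                   # hypothetical phone set
eps = 1e-12                                        # avoid log(0)
log_init = np.log(np.array([0.97, 0.01, 0.01, 0.01]) + eps)
log_trans = np.log(np.array([
    [0.6, 0.4, 0.0, 0.0],    # sil -> sil or k
    [0.0, 0.5, 0.5, 0.0],    # k   -> k or ae
    [0.0, 0.0, 0.5, 0.5],    # ae  -> ae or t
    [0.2, 0.0, 0.0, 0.8],    # t   -> t, or back to sil
]) + eps)
# Stand-in acoustic scores P(frame | state) for six feature frames.
log_emit = np.log(np.array([
    [0.7, 0.1, 0.1, 0.1],
    [0.2, 0.6, 0.1, 0.1],
    [0.1, 0.6, 0.2, 0.1],
    [0.1, 0.1, 0.7, 0.1],
    [0.1, 0.1, 0.2, 0.6],
    [0.1, 0.1, 0.1, 0.7],
]))

def viterbi(log_init, log_trans, log_emit):
    """Most likely state path through the HMM for the observed frames."""
    T, N = log_emit.shape
    score = log_init + log_emit[0]
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans          # from-state x to-state
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emit[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [states[i] for i in reversed(path)]

print(viterbi(log_init, log_trans, log_emit))      # e.g. ['sil','k','k','ae','t','t']
```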

Precision Engineering and Practice

Added on: March 4th, 2012 by Afsal Meerankutty

There are three terms often used in precision practices and they are often used incorrectly or in a vague manner. The terms are accuracy, repeatability, and resolution. Because the present discussion is on machining and fabrication methods, the definitions will be in terms related to machine tools. However, these terms have applicability to metrology, instrumentation, and experimental procedures, as well.

Precision engineering deals with many sources of error and their solutions. Precision is one of the most important considerations in manufacturing, and machining is an important part of the manufacturing process. Many factors, such as feedback variables, machine tool variables, spindle variables, workpiece variables, and environmental effects like thermal errors, affect the accuracy of a machine. The main goal of precision engineering is to reduce the uncertainty of dimensions. Achieving an exact dimension is very difficult, so a tolerance is allowed on the workpiece.

High Efficiency Miller Cycle Gas Engine

Added on: March 4th, 2012 by Afsal Meerankutty

The output of gas engines for cogeneration mainly ranges from 100 to 1,000 kW. Present gas engines are broadly classified into two types: the lean-burn system and the stoichiometric air-fuel ratio combustion system, with lower-output engines using the stoichiometric air-fuel ratio combustion system and medium and large engines adopting the lean-burn system. The lean-burn system generally features high generating efficiency and low NOx emissions, in addition to the excellent durability that comes with its low-temperature combustion flame.

Mitsubishi Heavy Industries, Ltd. (MHI) and Osaka Gas Co., Ltd. have jointly applied the Miller cycle to a lean-burn gas engine to develop the world's first gas engine in this class with a generating efficiency of 40%. The 280 kW engine was released commercially in April 2000 after clearing an endurance test of over 4,000 hours. This paper describes the main technologies and performance specifications of this engine, as well as of the series of engines planned for the future.

Nanotechnology in Mechanical Engineering

Added on: March 4th, 2012 by Afsal Meerankutty

We live in a world of machines. And the technical foundation for these machines lies in the steam engine developed during the 1780s by James Watt. The concept of deriving useful mechanical work from raw fuel such as wood, coal, oil, and now uranium was revolutionary. Watt also developed the slider-crank mechanism to convert reciprocating motion to rotary motion.

To improve on this first, basic engine, the people who followed Watt created the science of thermodynamics and perfected power transmission through gears, cams, shafts, bearings, and mechanical seals. A new vocabulary involving heat, energy, power, and torque was born with the steam engine.

MEMS Switches

Added on: March 4th, 2012 by Afsal Meerankutty

Compound solid state switches such as GaAs MESFETs and PIN diodes are widely used in microwave and millimeter-wave integrated circuits (MMICs) for telecommunications applications, including signal routing, impedance matching networks, and adjustable gain amplifiers. However, these solid-state switches have a large insertion loss (typically 1 dB) in the on state and poor electrical isolation in the off state. Recent developments in micro-electro-mechanical systems (MEMS) have been continuously providing new and improved paradigms in the field of microwave applications, and differently configured micromachined miniature switches have been reported. Among these, capacitive membrane microwave switches present lower insertion loss, higher isolation, better linearity and zero static power consumption. In this presentation, we describe the design, fabrication and performance of a surface-micromachined capacitive microwave switch on a glass substrate using electroplating techniques.

Landmine Detection

Added on: March 4th, 2012 by Afsal Meerankutty

Landmines and unexploded ordnance (UXO) are a legacy of war, insurrection, and guerilla activity. Landmines kill and maim approximately 26,000 people annually. In Cambodia, whole areas of arable land cannot be farmed due to the threat of landmines. United Nations relief operations are made more difficult and dangerous due to the mining of roads. Current demining techniques are heavily reliant on metal detectors and prodders.

Technologies used for landmine detection include:

  • Metal detectors — capable of finding even low-metal-content mines in mineralized soils.
  • Nuclear magnetic resonance, fast neutron activation and thermal neutron activation.
  • Thermal imaging and electro-optical sensors — detect evidence of buried objects.
  • Biological sensors such as dogs, pigs, bees and birds.
  • Chemical sensors such as thermal fluorescence — detect airborne and waterborne explosive vapors.

In this discussion, we will concentrate on Ground Penetrating Radar (GPR). This ultra-wideband radar provides centimeter resolution to locate even small targets. There are two distinct types of GPR: time-domain and frequency-domain. Time-domain or impulse GPR transmits discrete pulses of nanosecond duration and digitizes the returns at GHz sample rates. Frequency-domain GPR systems transmit single frequencies either uniquely, as a series of frequency steps, or as a chirp; the amplitude and phase of the return signal are measured, and the resulting data is converted to the time domain. GPR operates by detecting dielectric contrasts in the soil, which allows it to locate even non-metallic mines.
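As a rough illustration of the frequency-domain variant, the sketch below synthesizes a stepped-frequency return from two hypothetical buried reflectors and converts it to a time-domain range profile with an inverse FFT; the soil permittivity and target depths are assumed values, not measured data.

```python
import numpy as np

c_soil = 3e8 / np.sqrt(9.0)              # wave speed in soil, assuming eps_r = 9
df = 10e6
freqs = np.arange(0.5e9, 2.0e9, df)      # stepped-frequency sweep, 0.5-2 GHz
targets = [(0.10, 1.0), (0.40, 0.3)]     # (depth m, reflectivity), hypothetical

# Complex return at each frequency: one phase-delayed echo per target.
resp = sum(a * np.exp(-2j * np.pi * freqs * (2 * d / c_soil))
           for d, a in targets)

profile = np.abs(np.fft.ifft(resp))      # synthesized time-domain pulse
dt = 1.0 / (len(freqs) * df)             # time per bin after the IFFT
depths = np.arange(len(freqs)) * dt * c_soil / 2
print("echo depths (m):", np.round(depths[profile > 0.2 * profile.max()], 2))
```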

In this discussion we deal with buried anti-tank (AT) and anti-personnel (AP) landmines, which require close approach or contact to activate. AT mines, designed to impede the progress of or destroy vehicles, range from about 15 to 35 cm in size. They are typically buried up to 40 cm deep, but they can also be deployed on the surface of a road to block a column of machinery. AP mines, designed to kill and maim people, range from about 5 to 15 cm in size.

Intrusion Detection Systems

Added on: March 4th, 2012 by Afsal Meerankutty

An intrusion is an active sequence of related events that deliberately tries to cause harm, such as rendering a system unusable, accessing unauthorized information or manipulating such information. To record information about both successful and unsuccessful attempts, security professionals place devices called sensors that examine the network traffic. These sensors are placed both in front of the firewall (the unprotected area) and behind the firewall (the protected area), and an attack can be evaluated by comparing the information recorded by the two.

An Intrusion Detection System (IDS) can be defined as the tools, methods and resources that help identify, assess and report unauthorized activity. Intrusion detection is typically one part of an overall protection system installed around a system or device. IDSs work at the network layer of the OSI model, with sensors placed at choke points on the network. They analyze packets to find specific patterns in the network traffic; if such a pattern is found, an alert is logged and a response can be based on the data recorded.
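A minimal sketch of that pattern-matching idea, with invented signatures; real IDSs such as Snort use far richer rule languages and stateful analysis.

```python
# Toy signature-based detection: scan packet payloads for known byte
# patterns and emit an alert for each match. Signatures are illustrative.
signatures = {
    b"/etc/passwd": "path traversal attempt",
    b"' OR 1=1": "SQL injection probe",
    b"\x90" * 16: "NOP sled (possible shellcode)",
}

def inspect(payload: bytes, src: str):
    """Return an alert string for every known pattern found in the payload."""
    return [f"{src}: {name}" for sig, name in signatures.items() if sig in payload]

for alert in inspect(b"GET /../../etc/passwd HTTP/1.1", "10.0.0.7"):
    print("ALERT:", alert)
```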

Power Monitoring System

Added on: March 3rd, 2012 by Afsal Meerankutty

The microcontroller-based power monitoring system is an electronic device used to continuously monitor power parameters such as voltage, current and frequency at various points of an electrical or electronic device. The entire system is controlled by the microcontroller (80C31) and performs real-time monitoring of the various parameters, hence the name "REAL TIME MONITORING OF HIGH CAPACITY (400 kVA-600 kVA) BATTERY BACKUP SYSTEM". The system is an 8-channel device that accepts 8 analog input signals and consists of an analog multiplexer, A/D converter, ROM, RAM, buffers, etc. The different channels are selected by a simple switch operation. The channels and alarms from the UPS are given to the microcontroller, which processes and controls the parameters.
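The following sketch imitates the channel-scan-and-alarm loop described above; read_adc and the limit table are placeholders, since the real system reads its multiplexer and A/D converter through the 80C31.

```python
import random

LIMITS = {0: (180.0, 260.0),    # channel 0: UPS output voltage window (V), assumed
          1: (45.0, 55.0)}      # channel 1: mains frequency window (Hz), assumed

def read_adc(channel: int) -> float:
    """Placeholder for the real multiplexer-select + A/D conversion."""
    return random.uniform(0.0, 300.0)

for ch in range(8):                        # scan all 8 analog channels
    value = read_adc(ch)
    lo, hi = LIMITS.get(ch, (float("-inf"), float("inf")))
    print(f"channel {ch}: {value:7.1f} -> {'OK' if lo <= value <= hi else 'ALARM'}")
```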

Neuro Chips

Added on: March 3rd, 2012 by Afsal Meerankutty

Until recently, neurobiologists have used computers for simulation, data collection, and data analysis, but not to interact directly with nerve tissue in live, behaving animals. Although digital computers and nerve tissue both use voltage waveforms to transmit and process information, engineers and neurobiologists have yet to cohesively link the electronic signaling of digital computers with the electronic signaling of nerve tissue in freely behaving animals.

Recent advances in microelectromechanical systems (MEMS), CMOS electronics, and embedded computer systems will finally let us link computer circuitry to neural cells in live animals and, in particular, to re-identifiable cells with specific, known neural functions. The key components of such a brain-computer system include neural probes, analog electronics, and a miniature microcomputer. Researchers developing neural probes such as submicron MEMS probes, microclamps, microprobe arrays, and similar structures can now penetrate and make electrical contact with nerve cells without causing significant or long-term damage to probes or cells.

Researchers developing analog electronics such as low-power amplifiers and analog-to-digital converters can now integrate these devices with microcontrollers on a single low-power CMOS die. Further, researchers developing embedded computer systems can now incorporate all the core circuitry of a modern computer on a single silicon chip that can run on minuscule power from a tiny watch battery. In short, engineers have all the pieces they need to build truly autonomous implantable computer systems.

Until now, high signal-to-noise recording and digital processing of real-time neuronal signals have been possible only in constrained laboratory experiments. By combining MEMS probes with analog electronics and modern CMOS computing into self-contained, implantable microsystems, implantable computers will free neuroscientists from the lab bench.

Artificial Eye

Added on: March 3rd, 2012 by Afsal Meerankutty

The retina is a thin layer of neural tissue that lines the back wall inside the eye. Some of its cells act to receive light, while others interpret the information and send messages to the brain through the optic nerve. This is part of the process that enables us to see. In a damaged or dysfunctional retina, the photoreceptors stop working, causing blindness. By some estimates, more than 10 million people worldwide are affected by retinal diseases that lead to loss of vision.

The absence of effective therapeutic remedies for retinitis pigmentosa (RP) and age-related macular degeneration (AMD) has motivated the development of experimental strategies to restore some degree of visual function to affected patients. Because the remaining retinal layers are anatomically spared, several approaches have been designed to artificially activate this residual retina and thereby the visual system.

At present, two general strategies have been pursued. The "epiretinal" approach involves a semiconductor-based device placed above the retina, close to or in contact with the nerve fiber layer and retinal ganglion cells. In this approach the information must be captured by a camera system before data and energy are transmitted to the implant. The "subretinal" approach involves electrical stimulation of the inner retina from the subretinal space by implantation of a semiconductor-based microphotodiode array (MPA) into this location. The concept of the subretinal approach is that electrical charge generated by the MPA in response to a light stimulus may be used to artificially alter the membrane potential of neurons in the remaining retinal layers in a manner that produces formed images.

Some researchers have developed an implant system in which a video camera captures images, a chip processes them, and an electrode array transmits them to the brain. This approach is called a cortical implant.

Abrasive Jet Machining

Added on: March 3rd, 2012 by Afsal Meerankutty

Abrasive water jet machine tools are suddenly a hit in the market, since they are quick to program and can make money on short runs. They are quick to set up and offer fast turnaround on the machine. They complement existing tools used for either primary or secondary operations and can quickly make parts out of virtually any material. One major advantage is that they do not heat the material. All sorts of intricate shapes are easy to make, and the machine turns out to be a money-maker.

Ultimately, a machine shop without a water jet is like a carpenter without a hammer. Sure, the carpenter can use the back of his crowbar to hammer in nails, but there is a better way. It is important to understand that abrasive jets are not the same thing as water jets, although they are closely related. Water jet technology has been around since the early 1970s or so, and abrasive jets extended the concept about ten years later. Both technologies use the principle of pressurizing water to extremely high pressure and allowing the water to escape through a small opening, typically called the orifice or jewel. Water jets use the beam of water exiting the orifice to cut soft material like candy bars, but are not effective for cutting harder materials. The inlet water is typically pressurized to between 20,000 and 60,000 pounds per square inch (PSI) and forced through a tiny hole in the jewel, typically 0.007" to 0.015" in diameter (0.18 to 0.4 mm). This creates a very high velocity beam of water. Abrasive jets use that same beam of water to accelerate abrasive particles to speeds fast enough to cut through much harder materials.
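For a feel for the numbers, here is an ideal (loss-free) Bernoulli estimate of the jet velocity at the upper pressure quoted above; a real nozzle loses some of this to friction and turbulence.

```python
# Back-of-envelope jet velocity from Bernoulli: v = sqrt(2 * P / rho).
P = 60_000 * 6894.76         # pump pressure in Pa (1 psi = 6894.76 Pa)
rho = 1000.0                 # water density, kg/m^3
v = (2 * P / rho) ** 0.5
print(f"ideal jet velocity: {v:.0f} m/s")   # roughly 900 m/s
```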

Aerospace Flywheel Development

Added on: March 3rd, 2012 by Afsal Meerankutty

Presently, energy storage on the Space Station and satellites is accomplished using chemical batteries, most commonly nickel-hydrogen or nickel-cadmium. A flywheel energy storage system is an alternative technology being considered as a replacement for the traditional electrochemical battery system in future space missions. Flywheels offer a longer lifetime, higher efficiency and a greater depth of discharge than batteries; the flywheel system is expected to improve both depth of discharge and working life by a factor of 3 compared with its battery counterpart. Although flywheels have long been used in spacecraft navigation and guidance systems, their use for energy storage is new; however, the two functions can easily be combined into a single system. Several advanced technologies must be demonstrated for flywheel energy storage to be a viable option for future space missions, including high-strength composite materials, highly efficient high-speed motor operation and control, and magnetic bearing levitation.
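A back-of-envelope sizing example using the rotational kinetic energy formula E = ½ I ω²; the rim mass, radius and speed below are illustrative guesses, not values from any NASA design.

```python
import math

m = 50.0                     # rim mass, kg (assumed)
r = 0.25                     # rim radius, m (assumed)
rpm = 40_000                 # operating speed (assumed)
I = m * r ** 2               # thin-rim approximation of the moment of inertia
w = rpm * 2 * math.pi / 60   # angular speed, rad/s
E = 0.5 * I * w ** 2
print(f"stored energy: {E / 3.6e6:.1f} kWh")   # about 7.6 kWh for these numbers
```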

High Speed Trains

Added on: March 3rd, 2012 by Afsal Meerankutty

When English inventor Richard Trevithick introduced the steam locomotive on 21 February 1804 in Wales, it achieved a speed of 8 km/h (5 mph). In 1815, Englishman George Stephenson built the world’s first workable steam locomotive. In 1825, he introduced the first passenger train, which steamed along at 25 km/h (16 mph). Today, trains can fly down the tracks at 500 km/h (311 mph). And fly they do, not touching the tracks.

There is no single defined speed at which a train is called a high-speed train, but trains running at 150 km/h and above are generally called high-speed trains.

Vehicle Skid Control

Added on: March 2nd, 2012 by Afsal Meerankutty

Vehicle skid can be defined as the loss of traction between a vehicle's tyres and the road surface due to the forces acting on the vehicle. Most skids are caused by driver error, although only about 15% of accidents are the direct result of a vehicle skidding. Skids occurring in other accidents are usually the result of last-minute action by the driver when faced with a crisis ahead, rather than actually causing the accident. Skids can occur in dry, wet and icy conditions; however, the chances of losing control and having an accident increase by 50% in the wet. The most common types of skid we will be confronted with are when the rear end of the car slides out, causing oversteer, and when the front of the car plows toward the outside of a turn without following its curve, causing understeer. Usually, oversteer occurs as a result of going into a corner too fast or hitting a slick area, causing the rear wheels to slide. A third type, the four-wheel skid, can also occur, where all four wheels lock up and the vehicle slides in the direction its forward momentum carries it, with no directional control.

To counter these skids and prevent accidents, Vehicle Skid Control (VSC) is incorporated in the vehicle. VSC takes the safety of the driver and the vehicle to the next level. It falls under active safety technology, which helps you avoid a crash. VSC senses the onset of traction loss and helps the driver stay on track; this is achieved via the system's ability to reduce engine power and to control the brake actuator. VSC helps the driver maintain traction under demanding conditions by detecting and helping to correct wheel spin. It uses a variety of sensor inputs to determine whether the car is losing traction, then applies the brakes to individual wheels to correct for discrepancies; the system will also back off the throttle to reduce power. VSC integrates traction control to limit rear wheelspin on slippery surfaces. The system electronically monitors speed and direction, and compares the vehicle's direction of travel with the driver's steering, acceleration and braking inputs. VSC can help the driver compensate for loss of lateral traction, which can cause skids and loss of vehicle control.
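The sketch below shows the comparison at the heart of such a system: the yaw rate implied by the driver's steering (from a textbook low-speed bicycle model) against the measured yaw rate. The wheelbase, threshold and interventions are simplified assumptions, not a production algorithm.

```python
def intended_yaw_rate(speed_ms: float, steer_rad: float,
                      wheelbase_m: float = 2.7) -> float:
    """Low-speed bicycle-model approximation: yaw rate = v * delta / L."""
    return speed_ms * steer_rad / wheelbase_m

def vsc_step(speed_ms, steer_rad, measured_yaw, threshold=0.08):
    error = intended_yaw_rate(speed_ms, steer_rad) - measured_yaw
    if abs(error) < threshold:
        return "no action"
    if error > 0:   # car rotates less than commanded: understeer
        return "brake inner rear wheel, reduce throttle"
    return "brake outer front wheel, reduce throttle"   # oversteer

print(vsc_step(25.0, 0.10, 0.04))   # understeering example: system intervenes
```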

Wavelet Video Processing Technology

Added on: March 2nd, 2012 by Afsal Meerankutty

The biggest obstacle to the multimedia revolution is digital obesity: the bloat that occurs when pictures, sound and video are converted from their natural analog form into computer language for manipulation or transmission. Given the present explosion of high-quality data, the need to compress it with minimal distortion is the need of the hour. Compression lowers the cost of storage and transmission by packing data into a smaller space.

One of the hottest areas in advanced compression is wavelet compression. Wavelet video processing technology offers some alluring features, including high compression ratios and eye-pleasing enlargements.
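A toy example of the idea, using one level of the simple Haar wavelet on an image row and discarding small detail coefficients; real codecs use better wavelet filters (e.g. the 9/7 pair) plus quantization and entropy coding, but the principle is the same.

```python
import numpy as np

row = np.array([12, 14, 80, 82, 81, 79, 13, 11], dtype=float)

avg = (row[0::2] + row[1::2]) / 2        # low-pass (average) half
diff = (row[0::2] - row[1::2]) / 2       # high-pass (detail) half
diff[np.abs(diff) < 2.0] = 0             # "compression": drop small details

recon = np.empty_like(row)               # inverse transform
recon[0::2] = avg + diff
recon[1::2] = avg - diff
print("original:", row)
print("restored:", recon)                # close to the original row
```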

Transparent Electronics

Added on: March 1st, 2012 by Afsal Meerankutty

Transparent electronics is an emerging science and technology field focused on producing 'invisible' electronic circuitry and opto-electronic devices. Applications include consumer electronics, new energy sources, and transportation; for example, automobile windshields could transmit visual information to the driver. Glass in almost any setting could also double as an electronic device, possibly improving security systems or offering transparent displays. In a similar vein, windows could be used to produce electrical power. Other civilian and military applications in this research field include real-time wearable displays.

As with conventional Si/III-V-based electronics, the basic device structure is based on semiconductor junctions and transistors. However, the device building-block materials (the semiconductor, the electric contacts, and the dielectric/passivation layers) must now be transparent in the visible range: a true challenge! Therefore, the first scientific goal of this technology must be to discover, understand, and implement transparent high-performance electronic materials. The second goal is their implementation and evaluation in transistor and circuit structures. The third goal relates to achieving application-specific properties, since transistor performance and materials property requirements vary depending on the final product device specifications. Consequently, enabling this revolutionary technology requires bringing together expertise from various pure and applied sciences, including materials science, chemistry, physics, electrical/electronic/circuit engineering, and display science.

Plastic Memory

Added on: March 1st, 2012 by Afsal Meerankutty

A conducting plastic has been used to create a new memory technology with the potential to store a megabit of data in a millimeter-square device, 10 times denser than current magnetic memories. The device is cheap and fast, but cannot be rewritten, so it would only be suitable for permanent storage.

The device sandwiches a blob of a conducting polymer called PEDOT and a silicon diode between perpendicular wires.

The key to the new technology was discovered by passing a high current through PEDOT (polyethylenedioxythiophene), which turns it into an insulator, rather like blowing a fuse. The polymer thus has two possible states, conductor and insulator, which form the one and zero needed to store digital data.

However, turning the polymer into an insulator involves a permanent chemical change, meaning the memory can only be written once.
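The write-once behaviour can be modelled in a few lines; this is purely a conceptual sketch of a write-once-read-many (WORM) bit array, not the researchers' device interface.

```python
class WormMemory:
    """Each cell starts as a conductor (1) and can be irreversibly blown to 0."""
    def __init__(self, size: int):
        self.cells = [1] * size          # pristine polymer conducts

    def write_zero(self, i: int):
        self.cells[i] = 0                # permanent chemical change: no undo

    def read(self, i: int) -> int:
        return self.cells[i]

m = WormMemory(8)
for i in (1, 4, 6):
    m.write_zero(i)                      # the pattern can never be erased
print(m.cells)                           # [1, 0, 1, 1, 0, 1, 0, 1]
```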

Phishing

Added on: February 28th, 2012 by Afsal Meerankutty

In the field of computer security, phishing is the criminally fraudulent process of attempting to acquire sensitive information such as usernames, passwords and credit card details by masquerading as a trustworthy entity in an electronic communication. A phishing message is typically a fraudulent e-mail that attempts to get you to divulge personal data that can then be used for illegitimate purposes.

There are many variations on this scheme. It is possible to phish for other information in addition to usernames and passwords, such as credit card numbers, bank account numbers, social security numbers and mothers' maiden names. Phishing presents direct risks through the use of stolen credentials, and indirect risks to institutions that conduct business online through erosion of customer confidence. The damage caused by phishing ranges from denial of access to e-mail to substantial financial loss.

This report is also concerned with anti-phishing techniques. There are several different techniques to combat phishing, including legislation and technology created specifically to protect against it. No single technology will completely stop phishing; however, a combination of good organization and practice, proper application of current technologies, and improvements in security technology has the potential to drastically reduce the prevalence of phishing and the losses suffered from it. Anti-phishing software and computer programs are designed to prevent phishing and trespassing on confidential information. Anti-phishing software is designed to track websites and monitor activity; any suspicious behavior can be automatically reported and even reviewed as a report after a period of time.
The report also covers detecting phishing attacks, how to prevent and avoid being scammed, how to react when you suspect or reveal a phishing attack, and what you can do to help stop phishers.
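A sketch of a few classic URL heuristics that anti-phishing filters commonly apply; the rules and weights are illustrative, not any particular vendor's logic.

```python
import re

def suspicion_score(url: str) -> int:
    """Crude additive score: higher means more phishing-like."""
    score = 0
    if re.match(r"https?://\d{1,3}(\.\d{1,3}){3}", url):
        score += 2                     # raw IP address instead of a domain
    if "@" in url:
        score += 2                     # user@host trick hides the real host
    if url.count("-") > 3 or len(url) > 75:
        score += 1                     # very long or hyphen-heavy URLs
    if re.search(r"(login|verify|account|update)", url, re.I):
        score += 1                     # credential-bait keywords
    return score

for u in ["http://192.168.4.2/paypal/login.php", "https://www.example.com/"]:
    print(u, "->", suspicion_score(u))
```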

3D Television

Added on: February 28th, 2012 by Afsal Meerankutty

Three-dimensional TV is expected to be the next revolution in TV history. Researchers have implemented a 3D TV prototype system with real-time acquisition, transmission and 3D display of dynamic scenes, and developed a distributed, scalable architecture to manage the high computation and bandwidth demands. The 3D display shows high-resolution stereoscopic color images for multiple viewpoints without special glasses. This is the first real-time, end-to-end 3D TV system with enough views and resolution to provide a truly immersive 3D experience. Japan plans to make this futuristic television a commercial reality by 2020 as part of a broad national project that will bring together researchers from the government, technology companies and academia. The targeted "virtual reality" television would allow people to view high-definition images in 3D from any angle, in addition to being able to touch and smell the objects being projected upwards from a screen to the floor.

Electrodynamic Tether

Added on: February 28th, 2012 by Afsal Meerankutty

An electrodynamic (ED) tether is a long conducting wire extended from a spacecraft. It has strong potential for providing propellantless propulsion to spacecraft in low Earth orbit. An electrodynamic tether uses the same principle as the electric motor in toys, appliances and computer disk drives: it works as a thruster because a magnetic field exerts a force on a current-carrying wire, with the magnetic field supplied by the Earth. Properly controlled, the forces generated by an electrodynamic tether can be used to pull or push a spacecraft, acting as a brake or a booster. NASA plans to lasso energy from Earth's atmosphere with a tether as part of the first demonstration of a propellant-free space propulsion system, potentially leading to a revolutionary space transportation system. Working with Earth's magnetic field would benefit a number of spacecraft, including the International Space Station. Tether propulsion requires no fuel, is completely reusable and environmentally clean, and provides all these features at low cost.
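An order-of-magnitude estimate of the tether force from F = B·I·L, with representative values for low Earth orbit rather than figures from any specific mission.

```python
B = 3e-5      # geomagnetic field strength in LEO, tesla (~0.3 gauss), assumed
I = 1.0       # tether current, amperes, assumed
L = 10_000    # tether length, metres, assumed
F = B * I * L
print(f"Lorentz force on the tether: {F:.2f} N")   # ~0.3 N: small, but fuel-free
```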

HVAC

Added on: February 28th, 2012 by Afsal Meerankutty

Wireless transmission of electromagnetic radiation (communication signals) has become a popular method of transmitting RF signals indoors, such as cordless, wireless and cellular telephone signals, pager signals, two-way radio signals, video conferencing signals and LAN signals.

Indoor wireless transmission has the advantage that the building in which transmission takes place does not have to be filled with wires or cables equipped to carry a multitude of signals. Wires and cables are costly to install and may require expensive upgrades when their capacity is exceeded or when new technologies require different types of wires and cables than those already installed.

Traditional indoor wireless communication systems transmit and receive signals through a network of transmitters, receivers and antennas placed throughout the interior of a building. Devices must be located so that signals are not lost and signal strength is not unduly attenuated, and a change in the existing architecture also affects the wireless transmission. Another challenge in installing wireless networks in buildings is the need to predict RF propagation and coverage in the presence of complex combinations of shapes and materials.

In general, attenuation in buildings is larger than in free space, requiring more cells and higher power to obtain wider coverage. Despite all this, placement of transmitters, receivers and antennas in an indoor environment is largely a process of trial and error. Hence there is a need for a method and a system for efficiently transmitting RF and microwave signals indoors without having to install an extensive system of wires and cables inside the building.

This paper suggests an alternative method of distributing electromagnetic signals in buildings: recognizing that every building is already equipped with an RF waveguide distribution system, the HVAC ducts. The use of HVAC ducts is also amenable to a systematic design procedure, and should be significantly less expensive than other approaches since existing infrastructure is used and RF is distributed more efficiently.
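One reason ducts work as waveguides: a rectangular guide passes RF only above its cutoff frequency, f_c = c / (2a) for the dominant TE10 mode, where a is the wide dimension. A quick check with an assumed 0.30 m duct width shows common wireless bands propagate.

```python
c = 3e8            # speed of light, m/s
a = 0.30           # duct width, m (an assumed, typical rectangular duct size)
f_c = c / (2 * a)  # TE10 cutoff for a rectangular waveguide
print(f"TE10 cutoff: {f_c / 1e9:.2f} GHz")  # 0.50 GHz, so 2.4 GHz WLAN passes
```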

Dense Wavelength Division Multiplexing

Added on: February 28th, 2012 by Afsal Meerankutty

There has always been technological pressure to fulfill the constant need to extend the capacity of the communication channel, and DWDM (Dense Wavelength Division Multiplexing) has dramatically brought about an explosive enlargement of the capacity of fiber networks, solving the problem of increasing traffic demand most economically.

DWDM is a technique that makes possible the transmission of multiple discrete wavelengths, each carrying data at rates as high as the fiber plant allows, over a single fiber, unidirectionally or bidirectionally.

It is an advanced type of WDM in which the optical channels are more closely spaced than in WDM.
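To see what "closely spaced" means in practice, the sketch below walks the standard ITU-T 100 GHz DWDM grid around its 193.1 THz anchor and converts each channel to its wavelength.

```python
c = 299_792_458.0                      # speed of light, m/s
for n in range(-2, 3):                 # five channels around the grid anchor
    f = 193.1e12 + n * 100e9           # channel frequency, Hz (100 GHz spacing)
    print(f"ch {n:+d}: {f / 1e12:.1f} THz = {c / f * 1e9:.2f} nm")
# Adjacent channels sit only ~0.8 nm apart near 1552 nm.
```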

Emission Control Techniques

Added on: February 28th, 2012 by Afsal Meerankutty

The need to control emissions from automobiles gave rise to the computerization of the automobile. Hydrocarbons, carbon monoxide and oxides of nitrogen are created during the combustion process and are emitted into the atmosphere from the tail pipe. Hydrocarbons are also emitted as a result of vaporization of gasoline and from the crankcase of the automobile. The Clean Air Act of 1977 set limits on the amount of each of these pollutants that could be emitted from an automobile. The manufacturers' answer was the addition of certain pollution control devices and the creation of a self-adjusting engine. 1981 saw the first of these self-adjusting engines, called feedback fuel control systems. An oxygen sensor installed in the exhaust system measures the fuel content of the exhaust stream and sends a signal to a microprocessor, which analyzes the reading and operates a fuel or air mixture device to create the proper air/fuel ratio. As computer systems progressed, they were able to adjust ignition spark timing as well as operate the other emission controls installed on the vehicle. The computer is also capable of monitoring and diagnosing itself: if a fault is seen, it alerts the vehicle operator by illuminating a malfunction indicator lamp, and at the same time records the fault in its memory so that a technician can later retrieve it as a code that helps determine the proper repair. Some of the more popular emission control devices installed on the automobile are the EGR valve, catalytic converter, air pump, PCV valve, charcoal canister, etc.
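A sketch of the feedback loop described above: a narrow-band oxygen sensor switches around roughly 0.45 V at stoichiometry, and the controller nudges the fuel trim the other way. The step size and voltages are typical textbook values, not any specific ECU's calibration.

```python
def fuel_trim_step(o2_sensor_volts: float, trim: float) -> float:
    """One closed-loop iteration: lean off a rich reading, enrich a lean one."""
    if o2_sensor_volts > 0.45:     # high voltage = rich of stoichiometric
        return trim - 0.01         # remove a little fuel
    return trim + 0.01             # low voltage = lean: add a little fuel

trim = 0.0
for v in [0.8, 0.7, 0.2, 0.3, 0.9]:   # the sensor naturally oscillates
    trim = fuel_trim_step(v, trim)
    print(f"sensor {v:.1f} V -> fuel trim {trim:+.2f}")
```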

Like SI engines, CI engines are also a major source of emissions. Several technologies have been developed, and many experiments are under way, to reduce emissions from CI engines. The main constituents of diesel emissions are smoke, soot, oxides of nitrogen, hydrocarbons, carbon monoxide, etc. Unlike in SI engines, the emissions of carbon monoxide and hydrocarbons from CI engines are small. In order to give better engine performance, the emissions must be reduced to a great extent. They can be reduced by using smoke-suppressant additives, particulate traps, SCR (Selective Catalytic Reduction), etc.

3D Password

Added on: February 28th, 2012 by Afsal Meerankutty

Normally the authentication scheme a user undergoes is either very lenient or very strict. Throughout the years, authentication has been an interesting field. With technology developing, it can be very easy for 'others' to fabricate or steal an identity or to hack someone's password. Therefore many algorithms have come up, each with an interesting approach toward calculating a secret key. The algorithms are designed to pick a random number in a range on the order of 10^6, so the probability of the same number recurring is small.

Users nowadays are provided with major password stereotypes such as textual passwords, biometric scanning, and tokens or cards (such as an ATM card). Most textual passwords follow an encryption algorithm, as mentioned above; biometric scanning is your "natural" signature, and cards or tokens prove your validity. But some people hate having to carry around their cards, and some refuse to undergo strong IR exposure to their retinas (biometric scanning). Most textual passwords nowadays are kept very simple, say a word from the dictionary or a pet's or girlfriend's name. Years back, Klein performed such tests and could crack 10-15 passwords per day. Now, with faster processors and the many tools available on the Internet, this has become child's play.

Therefore we present our idea: the 3D password, a more customizable and very interesting way of authentication. The password is based on how human memory works, and in our scheme it draws on recognition, recall, biometrics or token-based authentication. Once the scheme is implemented and you log in to a secure site, the 3D password GUI opens up, starting with an additional textual password which the user simply enters. Once the user passes this first authentication, a 3D virtual room opens on the screen; in our case, let's say a virtual garage. As in a real garage, one finds all sorts of tools and equipment, each with unique properties. The user then interacts with these objects. Each object in the 3D space can be moved around in an (x, y, z) plane; that is the moving attribute of each object, common to all objects in the space. Suppose a user logs in, enters the garage, picks a screwdriver (initial position in xyz coordinates (5, 5, 5)) and moves it 5 places to his right (in the XY plane, i.e. to (10, 5, 5)). That can be identified as an authentication step. Only the true user understands and recognizes which object to choose among many; this is the recall and recognition part of human memory coming into play. Interestingly, a password can also be set by approaching a radio and setting its frequency to a number only the user knows. Security can be enhanced by including cards and biometric scanners as inputs, and there can be several levels of authentication a user can undergo.
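One plausible way to serialize such an interaction sequence into a secret, purely as a sketch of the idea rather than the authors' implementation: hash the ordered list of (object, action, parameters) tuples.

```python
import hashlib

# Hypothetical interaction sequence recorded in the virtual garage.
actions = [
    ("screwdriver", "move", (10, 5, 5)),    # picked at (5,5,5), moved right
    ("radio", "set_frequency", (106.3,)),   # a number only the user knows
    ("keypad", "type", ("falcon",)),        # the extra textual password
]
digest = hashlib.sha256(repr(actions).encode()).hexdigest()
print("3D password digest:", digest[:16], "...")
# Any change in object, action, order or coordinates yields a different digest.
```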

Air Brake System

Added on: February 28th, 2012 by Afsal Meerankutty

Air brake system consists of the following components:

Compressor:
The compressor generates the compressed air for the whole system.

Reservoir:
The compressed air from the compressor is stored in the reservoir.

Unloader Valve:
This maintains pressure in the reservoir at 8 bar. When the pressure goes above 8 bar, it immediately releases pressurized air to bring the system back to 8 bar.

Air Dryer:
This removes the moisture from the atmospheric air and prevents corrosion of the reservoir.

System Protection Valve:
This valve takes care of the whole system. Air from the compressor is distributed to the various channels only through this valve. The valve operates only above 4 bar: once the system pressure goes below 4 bar, the valve immediately becomes inactive and applies the parking brake to ensure safety.

Dual Brake Valve:
When the driver applies brakes, depending upon the pedal force this valve releases air from one side to another.

Graduated Hand Control Valve:
This valve takes care of the parking brakes.

Brake Chamber:
The air from the reservoir flows through various valves and finally reaches the brake chamber, which activates the S-cam in the brake shoe to apply the brakes at the front.

Actuators:
The air from the reservoir flows through various valves and finally reaches the brake chamber, which activates the S-cam in the brake shoe to apply the brakes in the rear.

Application of Shunt Power Filter

Added on: February 28th, 2012 by Afsal Meerankutty

In this paper, the implementation of a shunt active power filter with a small series reactor for a three-phase system is presented. The system consists of multiple non-linear loads, which are a combination of harmonic current sources and harmonic voltage sources, with significant unbalanced components. The filter consists of a three-phase current-controlled voltage source inverter (CC-VSI) with a filter inductance at the ac output and a dc-bus capacitor. The CC-VSI is operated to directly control the ac grid current to be sinusoidal and in phase with the grid voltage. The switching is controlled using ramp time current control, which is based on the concept of zero average current error. The simulation results indicate that the filter along with the series reactor is able to handle predominantly the harmonic voltage sources, as well as the unbalance, so that the grid currents are sinusoidal, in phase with the grid voltages and symmetrical.
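The core relationship the CC-VSI enforces can be shown in a few lines: the filter injects whatever current makes the grid current equal a sinusoidal reference, i.e. i_filter = i_reference - i_load. The waveforms below are synthetic stand-ins for the paper's measured ones.

```python
import numpy as np

t = np.linspace(0.0, 0.02, 400, endpoint=False)       # one 50 Hz cycle
i_load = 10.0 * np.sign(np.sin(2 * np.pi * 50 * t))   # square wave: harmonic-rich load
i_ref = 12.7 * np.sin(2 * np.pi * 50 * t)             # desired sinusoidal grid current
i_filter = i_ref - i_load                             # what the CC-VSI must inject
print(f"peak filter current: {np.abs(i_filter).max():.1f} A")
```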

Hyper Transport Technology

Added on: February 28th, 2012 by Afsal Meerankutty

HyperTransport technology is a very fast, low-latency, point-to-point link used for interconnecting integrated circuits on a board. HyperTransport, previously codenamed Lightning Data Transport (LDT), provides the bandwidth and flexibility critical for today's networking and computing platforms while retaining the fundamental programming model of PCI. HyperTransport was invented by AMD and perfected with the help of several partners throughout the industry.

HyperTransport was designed to support both CPU-to-CPU communications and CPU-to-I/O transfers, and thus features very low latency. It provides up to 22.4 GB/s aggregate CPU-to-I/O or CPU-to-CPU bandwidth in a highly efficient chip-to-chip technology that replaces existing complex multi-level buses. Enhanced 1.2 V LVDS signaling reduces signal noise, non-multiplexed lines cut down on signal activity, and dual-data-rate clocks lower clock rates while increasing data throughput. It employs a packet-based data protocol to eliminate many sideband (control and command) signals and supports asymmetric, variable-width data paths.
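One plausible decomposition of the 22.4 GB/s aggregate figure, assuming a 32-bit link clocked at 1.4 GHz with double-data-rate signaling and separate links in each direction.

```python
width_bytes = 32 // 8        # 32-bit link (assumed configuration)
clock_hz = 1.4e9             # link clock (assumed)
ddr = 2                      # data transferred on both clock edges
directions = 2               # separate upstream and downstream links
bw = width_bytes * clock_hz * ddr * directions
print(f"aggregate bandwidth: {bw / 1e9:.1f} GB/s")   # 22.4 GB/s
```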

New specifications are backward compatible with previous generations, extending the investment made in one generation of HyperTransport-enabled devices to future generations. HyperTransport devices are PCI software compatible, so they require little or no software overhead. The technology targets networking, telecommunications, computers and embedded systems, and any application where high speed, low latency and scalability are necessary.

Cylinder Deactivation

Added on: February 28th, 2012 by Afsal Meerankutty

With alternatives to the petrol engine being announced ever so often, you could be forgiven for thinking that the old favorite is on its last legs, but nothing could be further from the truth, and the possibilities for developing the petrol engine are endless. One of the most crucial jobs on the agenda is to find ways of reducing fuel consumption, cutting emissions of the greenhouse gas CO2 and also the toxic emissions which threaten air quality. One fast-emerging technology is cylinder deactivation, where a number of cylinders are shut down when less power is needed, to save fuel.

The simple fact is that when you only need small amounts of power, such as when crawling around town, what you really need is a smaller engine. To put it another way, an engine performs most efficiently when it is working harder, so ask it to do the work of an engine half its size and efficiency suffers. Pumping (throttling) losses are mostly to blame. Cylinder deactivation is one of the technologies that improve fuel economy; its objective is to reduce engine pumping losses under certain vehicle operating conditions.

When a petrol engine is working with the throttle wide open, pumping losses are minimal. But at part throttle the engine wastes energy trying to breathe through a restricted airway, and the bigger the engine, the bigger the problem. Deactivating half the cylinders at part load is much like temporarily fitting a smaller engine.

During World War II, enterprising car owners disconnected a spark plug wire or two in hopes of stretching their precious gasoline ration. Unfortunately, it didn't improve gas mileage. Nevertheless, Cadillac resurrected the concept out of desperation during the second energy crisis. The "modulated displacement" 6.0L V-8-6-4 introduced in 1981 disabled two, then four cylinders during part-throttle operation to improve the gas mileage of every model in Cadillac's lineup. A digital dash display reported not only range, average mpg, and instantaneous mpg, but also how many cylinders were operating. Customers enjoyed the mileage boost but not the side effects: many of them ordered dealers to cure their Cadillacs of the shakes and stumbles even if that meant disconnecting the modulated-displacement system.

Like wide ties, short skirts and $2-per-gallon gas, snoozing cylinders are back. General Motors, the first to show renewed interest in the idea, calls it Displacement on Demand (DoD). DaimlerChrysler, the first manufacturer to hit the U.S. market with a modern cylinder shut-down system, calls its approach the Multi-Displacement System (MDS). And Honda, who beat everyone to the punch by equipping Japanese-market Inspire models with cylinder deactivation last year, calls the approach Variable Cylinder Management (VCM).

The motivation is the same as before — improved gas mileage. Disabling cylinders finally makes sense because of the strides achieved in electronic power train controls. According to GM, computing power has been increased 50-fold in the past two decades and the memory available for control algorithms is 100 times greater. This time around, manufacturers expect to disable unnecessary cylinders so seamlessly that the driver never knows what’s happening under the hood.

Camless Engine

Added on: February 28th, 2012 by Afsal Meerankutty

The cam has been an integral part of the IC engine since its invention. The cam controls the "breathing channels" of the IC engine, that is, the valves through which the fuel-air mixture (in SI engines) or air (in CI engines) is supplied and the exhaust driven out. Besieged by demands for better fuel economy, more power, and less pollution, motor engineers around the world are pursuing a radical "camless" design that promises to deliver the internal combustion engine's biggest efficiency improvement in years. The aim of all this effort is liberation from a constraint that has handcuffed performance since the birth of the internal combustion engine more than a century ago. Camless engine technology is soon to be a reality for commercial vehicles. In the camless valve train, valve motion is controlled directly by a valve actuator; there is no camshaft or connecting mechanism. A precise electrohydraulic camless valve train controls the valve operations: opening, closing, etc.

The seminar looks at the working of the electrohydraulic camless engine, its general features and benefits over conventional engines. The engines powering today’s vehicles, whether they burn gasoline or diesel fuel, rely on a system of valves to admit fuel and air to the cylinders and let exhaust gases escape after combustion. Rotating steel camshafts with precision-machined egg-shaped lobes, or cams, are the hard-tooled “brains” of the system. They push open the valves at the proper time and guide their closure, typically through an arrangement of pushrods, rocker arms, and other hardware. Stiff springs return the valves to their closed position. In an overhead-camshaft engine, a chain or belt driven by the crankshaft turns one or two camshafts located atop the cylinder head.
A single overhead camshaft (SOHC) design uses one camshaft to move rockers that open both inlet and exhaust valves. The double overhead camshaft (DOHC), or twin-cam, setup does away with the rockers and devotes one camshaft to the inlet valves and the other to the exhaust valves.
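What removing the camshaft buys is that valve events become software values instead of machined lobe geometry. A sketch of the idea, with invented angles and a made-up scheduling rule:

```python
def intake_valve_events(rpm: int, load: float) -> tuple[float, float]:
    """Return (open, close) intake events in crank degrees; values illustrative."""
    open_deg = 350.0 - 10.0 * load          # open slightly earlier at high load
    close_deg = 570.0 + 30.0 * load         # close later to trap more charge
    if rpm > 4000:                          # exploit ram effect at high rpm
        close_deg += 15.0
    return open_deg, close_deg

print(intake_valve_events(rpm=1500, load=0.2))   # gentle cruise timing
print(intake_valve_events(rpm=5500, load=0.9))   # wide-open timing
```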

Darknet

Added on: February 28th, 2012 by Afsal Meerankutty

This paper outlines a migration path towards universal broadband connectivity, motivated by the design of a wireless store-and-forward communications network.

We argue that the cost of real-time, circuit-switched communications is sufficiently high that it may not be the appropriate starting point for rural connectivity. Based on market data for information and communication technology (ICT) services in rural India, we propose a combination of wireless technology with an asynchronous mode of communications to offer a means of introducing ICTs with:

  • affordability and practicality for end users
  • a sustainable cost structure for operators and investors
  • a smooth migration path to universal broadband connectivity.

A summary of results and data are given for an operational pilot test of this wireless network in Karnataka, India, beginning in March 2003.
We also briefly discuss the economics and policy considerations for deploying this type of network in the context of rural connectivity.

Adaptive Cruise Control

Added on: February 28th, 2012 by Afsal Meerankutty

Mentally, driving is a highly demanding activity – a driver must maintain a high level of concentration for long periods and be ready to react within a split second to changing situations. In particular, drivers must constantly assess the distance and relative speed of vehicles in front and adjust their own speed accordingly.
Those tasks can now be performed by an Adaptive Cruise Control (ACC) system, which is an extension of the conventional cruise control system.

Like a conventional cruise control system, ACC keeps the vehicle at a set constant speed. The significant difference, however, is that if a car with ACC is confronted with a slower moving vehicle ahead, it is automatically slowed down and then follows the slower vehicle at a set distance. Once the road ahead is clear again, the ACC accelerates the car back to the previous set cruising speed. In that way, ACC integrates a vehicle harmoniously into the traffic flow.
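A minimal sketch of the decision logic: hold the set speed unless a slower vehicle sits closer than the chosen time gap, in which case converge on its speed. The time gap and gain are illustrative, not a production controller.

```python
def acc_command(own_speed, set_speed, gap_m, lead_speed, time_gap_s=1.8):
    """Return the target speed (m/s) for the next control step."""
    desired_gap = own_speed * time_gap_s
    if gap_m is not None and gap_m < desired_gap:
        # Follow mode: track the lead vehicle, slowing extra if the gap is short.
        return min(set_speed, lead_speed + 0.2 * (gap_m - desired_gap))
    return set_speed                      # road clear: resume cruise speed

print(acc_command(30.0, 33.0, gap_m=40.0, lead_speed=25.0))  # slows to follow
print(acc_command(30.0, 33.0, gap_m=None, lead_speed=0.0))   # cruises at 33
```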

Common Synthetic Plastics

Added on: February 28th, 2012 by Afsal Meerankutty

Plastic molecules are made of long chains of repeating units called monomers. The atoms that make up a plastic's monomers and the arrangement of the monomers within the molecule together determine many of the plastic's properties. Plastics are one classification of polymers: if a polymer can be shaped into hard, tough utility articles by the application of heat and pressure, it is used as a plastic.

Synthetic polymers are often referred to as “plastics”, such as the well-known polyethylene and nylon. However, most of them can be classified in at least three main categories: thermoplastics, thermosets and elastomers.

Man-made polymers are used in a bewildering array of applications: food packaging, films, fibers, tubing, pipes, etc. The personal care industry also uses polymers to aid in texture of products, binding etc.

4G Wireless Technology

Added on: February 27th, 2012 by Afsal Meerankutty

Pick up any newspaper today and it is a safe bet that you will find an article somewhere relating to mobile communications. If it is not in the technology section it will almost certainly be in the business section and relate to the increasing share prices of operators or equipment manufacturers, or acquisitions and take-overs thereof. Such is the pervasiveness of mobile communications that it is affecting virtually everyone’s life and has become a major political topic and a significant contributor to national gross domestic product (GDP).

The major driver of change in the mobile area in the last ten years has been the massive enabling implications of digital technology, both in digital signal processing and in service provision. The equivalent driver now, and in the next five years, will be the all-pervasiveness of software in both networks and terminals. The digital revolution is well underway, and we stand at the doorway to the software revolution. Accompanying these changes are societal developments involving extensions in the use of mobiles. Starting out from speech-dominated services, we are now experiencing massive growth in applications involving SMS (Short Message Service), together with the start of Internet applications using WAP (Wireless Application Protocol) and i-mode. The mobile phone has not only followed the watch, the calculator and the organiser as an essential personal accessory but has subsumed all of them. With the new Internet extensions it will also lead to a convergence of the PC, hi-fi and television, and provide mobility to facilities previously available on only one network.

Gasoline Direct Injection

Added on: February 27th, 2012 by Afsal Meerankutty

In recent years, legislative and market requirements have driven the need to reduce fuel consumption while meeting increasingly stringent exhaust emissions. This trend has dictated increasing complexity in automotive engines and new approaches to engine design. A key research objective for the automotive engineering community has been the potential combination of gasoline-engine specific power with diesel-like engine efficiency in a cost-competitive, production-feasible power train. One promising engine development route for achieving these goals is the application of lean-burn direct injection (DI) to gasoline engines. In carburetors, the fuel is drawn in by the pressure difference created by the incoming air, which degrades the carburetor's behavior when changes in air density are appreciable. There was a brief period of electronically controlled carburetors, but they were abandoned due to their complexity. In fuel injection, on the other hand, the fuel is injected into the air.

Fluid Amplifiers

Added on: February 27th, 2012 by Afsal Meerankutty No Comments

When one stream of fluid is permitted to impinge on another, the direction of flow changes and the tendency of the fluid to attach to a wall also changes. This concept gives rise to a new engineering system known as ‘fluidics’. The term fluidics is a contraction of the words ‘fluid’ and ‘logic’. Tremendous progress has been made in the last twenty years in the design and application of fluidic devices.

The current interest in fluidics for logic and control functions was launched by the U.S. Army’s Harry Diamond Laboratories, which invented the first fluid amplifier in March 1960. This work was later expanded through a series of research and development contracts, and the work reported in this section was sponsored by the U.S. Air Force. The environmental capability of fluidic devices permits direct measurement of required control parameters within the engine.

These devices are more economical, faster and smaller than hydraulic control elements employing moving parts such as valves. Fluidic devices have no moving parts, so they are more reliable and have a long life. Fluidics now offers an alternative to some devices operated electronically: it can work where electronic devices are unsatisfactory, such as at high temperature or humidity, in the presence of severe vibration, in high fire-risk areas, or where ionizing radiation is present.
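A bistable fluid amplifier illustrates how such devices perform logic with no moving parts: the power jet attaches to one wall (the Coanda effect) and stays there until a pulse at a control port flips it, so the device latches like a set-reset flip-flop. A minimal conceptual model of that switching behaviour, with the class and port names invented here for illustration:

```python
# Conceptual model of a bistable (wall-attachment) fluid amplifier.
# The class and its port names are illustrative, not from the text.

class BistableFluidAmplifier:
    """The power jet clings to one wall (Coanda effect) until a pulse on
    the attached side pushes it across to the other outlet."""

    def __init__(self):
        self.output = "left"  # which outlet the jet currently feeds

    def pulse(self, control_port):
        # A pulse on the side the jet is attached to detaches it;
        # a pulse on the far side has no effect (latching behaviour).
        if control_port == self.output:
            self.output = "right" if self.output == "left" else "left"
        return self.output

amp = BistableFluidAmplifier()
print(amp.pulse("left"))   # jet flips to the right outlet
print(amp.pulse("left"))   # no change: jet is now attached on the right
print(amp.pulse("right"))  # flips back to the left outlet
```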

Electro Chemical Machining

Added on: February 26th, 2012 by Afsal Meerankutty No Comments

Electro chemical machining (ECM) is the controlled removal of metal by anodic dissolution in an electrolytic medium in which the workpiece is the anode and the tool is the cathode.
Working: Two electrodes are placed about 0.5 mm apart and immersed in an electrolyte, typically a solution of sodium chloride. When an electrical potential of about 20 V is applied between the electrodes, the ions in the electrolyte migrate toward the electrodes.

Positively charged ions are attracted towards the cathode and negatively charged ions towards the anode. This initiates the flow of current in the electrolyte. The electrolysis at the cathode liberates hydroxyl ions and free hydrogen. The hydroxyl ions combine with the metal ions of the anode to form insoluble metal hydroxides, and material is thus removed from the anode. This process continues and the tool reproduces its shape in the workpiece (the anode). The high current densities promote rapid generation of metal hydroxides and gas bubbles in the small spacing between the electrodes, and within a few seconds these products become a barrier to the electrolyzing current. To maintain a continuously high current density, the products must be removed, which is achieved by circulating the electrolyte at high velocity through the gap between the electrodes. Note also that as material is removed the machining gap widens, so to maintain a constant gap the cathode must be advanced towards the anode at the same rate at which material is removed.
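The removal rate in ECM follows Faraday’s laws of electrolysis: the mass of metal dissolved is proportional to the charge passed. A rough worked example for an iron workpiece (the 1000 A current is an assumed illustrative value):

```python
# Material removal rate in ECM from Faraday's law of electrolysis.
# The 1000 A machining current is an assumed illustrative value.

FARADAY = 96485.0  # C/mol, Faraday constant

def mass_removal_rate(current_A, atomic_weight_g_mol, valency):
    """Mass of anode metal dissolved per second (g/s): m' = I*A / (Z*F)."""
    return current_A * atomic_weight_g_mol / (valency * FARADAY)

# Iron: atomic weight 55.85 g/mol, dissolving as Fe2+ (valency 2),
# density 7.86 g/cm^3.
mrr_mass = mass_removal_rate(1000.0, 55.85, 2)
mrr_volume = mrr_mass / 7.86

print(f"Mass removal:   {mrr_mass:.3f} g/s")      # ~0.289 g/s
print(f"Volume removal: {mrr_volume:.4f} cm^3/s")  # ~0.037 cm^3/s
```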

Chameleon Chips

Added on: February 26th, 2012 by Afsal Meerankutty No Comments

Today’s microprocessors sport a general-purpose design which has its own advantages and disadvantages.

  • Advantage: one chip can run a range of programs. That’s why you don’t need separate computers for different jobs, such as crunching spreadsheets or editing digital photos.
  • Disadvantage: for any one application, much of the chip’s circuitry isn’t needed, and the presence of those “wasted” circuits slows things down.

Suppose, instead, that the chip’s circuits could be tailored specifically for the problem at hand–say, computer-aided design–and then rewired, on the fly, when you loaded a tax-preparation program. One set of chips, little bigger than a credit card, could do almost anything, even changing into a wireless phone. The market for such versatile marvels would be huge, and would translate into lower costs for users.

So computer scientists are hatching a novel concept that could increase number-crunching power–and trim costs as well. Call it the chameleon chip.

Chameleon chips would be an extension of what can already be done with field-programmable gate arrays (FPGAs).

An FPGA is covered with a grid of wires. At each crossover, there’s a switch that can be semipermanently opened or closed by sending it a special signal. Usually the chip must first be inserted in a little box that sends the programming signals. But now, labs in Europe, Japan, and the U.S. are developing techniques to rewire FPGA-like chips anytime–and even software that can map out circuitry that’s optimized for specific problems.
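At the heart of an FPGA-style fabric is the lookup table (LUT): a tiny memory whose stored bits define which Boolean function a logic cell computes, so “rewiring” the chip amounts to rewriting those bits. A minimal software sketch of a two-input LUT being reprogrammed (the class is invented for illustration, not a vendor API):

```python
# Sketch of a 2-input FPGA lookup table (LUT): the truth table stored in
# the cell defines its logic function, and reconfiguration is just
# rewriting that table. Illustrative only -- not a real vendor API.

class LUT2:
    def __init__(self, truth_table):
        # truth_table[i] is the output for inputs (a, b) where i = 2*a + b
        self.truth_table = list(truth_table)

    def reconfigure(self, truth_table):
        """'Rewire' the cell by loading a new truth table."""
        self.truth_table = list(truth_table)

    def __call__(self, a, b):
        return self.truth_table[2 * a + b]

cell = LUT2([0, 0, 0, 1])       # configured as AND
print(cell(1, 1))               # 1
cell.reconfigure([0, 1, 1, 0])  # same hardware, now behaves as XOR
print(cell(1, 1))               # 0
```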

The chips still won’t change colors, but they may well color the way we use computers in years to come. A chameleon chip is a fusion of custom integrated circuits and programmable logic. For highly performance-oriented tasks, custom chips that do one or two things spectacularly, rather than many things averagely, are preferred; with field-programmable chips we now have hardware that can be rewired in an instant, so the benefits of customization can be brought to the mass market.

A reconfigurable processor is a microprocessor with erasable hardware that can rewire itself dynamically. This allows the chip to adapt effectively to the programming tasks demanded by the particular software it is interfacing with at any given time. Ideally, the reconfigurable processor can transform itself from a video chip to a central processing unit (CPU) to a graphics chip, for example, all optimized to allow applications to run at the highest possible speed. Such chips can be called “chips on demand.” In practical terms, this ability translates into immense flexibility in device functions. For example, a single device could serve as both a camera and a tape recorder (among numerous other possibilities): you would simply download the desired software and the processor would reconfigure itself to optimize performance for that function.

Reconfigurable processors compete in the market with traditional hard-wired chips and several types of programmable microprocessors. Programmable chips have been in existence for over ten years. Digital signal processors (DSPs), for example, are high-performance programmable chips used in cell phones, automobiles, and various types of music players.

Another variant, the programmable logic chip, is equipped with arrays of memory cells that can be programmed to perform hardware functions using software tools. These chips are more flexible than specialized DSPs but also slower and more expensive. Hard-wired chips are the oldest, cheapest, and fastest – but also the least flexible – of all the options.

Space Robotics

Added on: February 26th, 2012 by Afsal Meerankutty No Comments

A robot is a system with a mechanical body that uses a computer as its brain. By integrating the sensors and actuators built into the mechanical body, its motions are realised in computer software to execute the desired task. Robots are more flexible, in terms of their ability to perform new tasks or to carry out complex sequences of motion, than other categories of automated manufacturing equipment. Today there is a lot of interest in this field, and a separate branch of technology, ‘robotics’, has emerged; it is concerned with all problems of robot design, development and application. The technology that substitutes for or supports manned activities in space is called space robotics. Applications of space robots include the inspection and repair of defective satellites, the construction and resupply of a space station, and satellite retrieval. With the overlap of knowledge of kinematics, dynamics and control, and progress in fundamental technologies, it is becoming possible to design and develop advanced robotic systems, and this will throw open the doors to explore and experience the universe and bring countless changes for the better in the ways we live.
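The sensor-actuator integration described above is usually organised in software as a sense-plan-act loop: read the sensors, decide on a motion, command the actuators, and repeat. A minimal sketch of that loop for a single robot joint, with every function name and gain value invented here for illustration:

```python
# Minimal sense-plan-act control loop for a single robot joint.
# All names and gains here are illustrative assumptions, not from the text.

import time

def read_joint_angle():
    """Stand-in for a sensor read; a real robot would query an encoder."""
    return 0.0

def command_motor(torque):
    """Stand-in for an actuator command sent to the joint motor."""
    print(f"commanded torque: {torque:+.2f} N*m")

def control_loop(target_angle, kp=2.0, cycles=5, dt=0.1):
    for _ in range(cycles):
        angle = read_joint_angle()             # sense
        torque = kp * (target_angle - angle)   # plan: proportional control
        command_motor(torque)                  # act
        time.sleep(dt)

control_loop(target_angle=0.5)
```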