A real-time operating system (RTOS) is an operating system (OS) intended to serve real-time applications that process data as it comes in, typically without buffering delays. Processing time requirements (including any OS delay) are measured in tenths of seconds or shorter.
A key characteristic of an RTOS is the level of its consistency concerning the amount of time it takes to accept and complete an application's task; the variability is "jitter". A hard real-time operating system has less jitter than a soft real-time operating system. The chief design goal is not high throughput, but rather a guarantee of a soft or hard performance category. An RTOS that can usually or generally meet a deadline is a soft real-time OS; one that can meet deadlines deterministically is a hard real-time OS.
An RTOS has an advanced algorithm for scheduling. Scheduler flexibility enables a wider, computer-system orchestration of process priorities, but a real-time OS is more frequently dedicated to a narrow set of applications. Key factors in a real-time OS are minimal interrupt latency and minimal thread switching latency; a real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a given period of time.
The most common designs are:
- Event-driven designs switch tasks only when an event of higher priority needs servicing; this is called preemptive priority, or priority scheduling.
- Time-sharing designs switch tasks on a regular clocked interrupt, and on events; this is called round robin.
Time-sharing designs switch tasks more often than strictly needed, but give smoother multitasking, giving the illusion that a process or user has sole use of a machine.
Early CPU designs needed many cycles to switch tasks, during which the CPU could do nothing else useful. For example, with a 20 MHz 68000 processor (typical of the late 1980s), task switch times are roughly 20 microseconds. (In contrast, a 100 MHz ARM CPU (from 2008) switches in less than 3 microseconds.) Because of this, early OSes tried to minimize wasting CPU time by avoiding unnecessary task switching.
In typical designs, a task has three states:
- Running (executing on the CPU);
- Ready (ready to be executed);
- Blocked (waiting for an event, I/O for example).
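The three-state model might be sketched as follows; the type and function names here are illustrative, not taken from any particular RTOS API:

```c
/* Illustrative three-state task model for a small kernel.
 * All names are hypothetical. */
typedef enum { TASK_RUNNING, TASK_READY, TASK_BLOCKED } task_state;

typedef struct {
    int id;
    task_state state;
} task;

/* A task that must wait (e.g. for I/O) gives up the CPU. */
void task_block(task *t) { t->state = TASK_BLOCKED; }

/* The event the task waited for arrived: it becomes ready. */
void task_ready(task *t) { t->state = TASK_READY; }

/* The scheduler dispatches a ready task onto the CPU. */
void task_dispatch(task *t) { t->state = TASK_RUNNING; }
```

A real scheduler would additionally enforce that only one task per CPU is in the running state at a time.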
Most tasks are blocked or ready most of the time because generally only one task can run at a time per CPU. The number of items in the ready queue can vary greatly, depending on the number of tasks the system needs to perform and the type of scheduler that the system uses. On simpler non-preemptive but still multitasking systems, a task has to voluntarily give up its time on the CPU, which can leave a greater number of tasks waiting in the ready state and, in the worst case, lead to resource starvation.
Usually the data structure of the ready list in the scheduler is designed to minimize the worst-case length of time spent in the scheduler's critical section, during which preemption is inhibited, and, in some cases, all interrupts are disabled. But the choice of data structure depends also on the maximum number of tasks that can be on the ready list.
If there are never more than a few tasks on the ready list, then a doubly linked list of ready tasks is likely optimal. If the ready list usually contains only a few tasks but occasionally contains more, then the list should be sorted by priority. That way, finding the highest priority task to run does not require iterating through the entire list. Inserting a task then requires walking the ready list until reaching either the end of the list, or a task of lower priority than that of the task being inserted.
Care must be taken not to inhibit preemption during this search. Longer critical sections should be divided into small pieces. If an interrupt occurs that makes a high priority task ready during the insertion of a low priority task, that high priority task can be inserted and run immediately before the low priority task is inserted.
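The priority-sorted insertion described above can be sketched as follows; the singly linked representation and the "lower number means higher priority" convention are assumptions for illustration (conventions vary between kernels):

```c
#include <stddef.h>

/* Sketch of a priority-sorted ready list; names are illustrative.
 * Lower number = higher priority (an assumed convention). */
typedef struct ready_node {
    int priority;
    struct ready_node *next;
} ready_node;

/* Insert 'task' before the first node of lower priority, so the
 * head of the list is always the highest-priority ready task.
 * In a real RTOS this walk would be split into small pieces so
 * that preemption is not inhibited for the whole search. */
void ready_list_insert(ready_node **head, ready_node *task) {
    while (*head && (*head)->priority <= task->priority)
        head = &(*head)->next;
    task->next = *head;
    *head = task;
}
```

Because the list stays sorted, picking the next task to run is O(1): the scheduler simply takes the head node.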
The critical response time, sometimes called the flyback time, is the time it takes to queue a new ready task and restore the state of the highest priority task to running. In a well-designed RTOS, readying a new task will take 3 to 20 instructions per ready-queue entry, and restoration of the highest-priority ready task will take 5 to 30 instructions.
In more advanced systems, real-time tasks share computing resources with many non-real-time tasks, and the ready list can be arbitrarily long. In such systems, a scheduler ready list implemented as a linked list would be inadequate.
Some commonly used RTOS scheduling algorithms are:
- Cooperative scheduling
- Preemptive scheduling
- Rate-monotonic scheduling
- Round-robin scheduling
- Fixed-priority preemptive scheduling, an implementation of preemptive time slicing
- Fixed-Priority Scheduling with Deferred Preemption
- Fixed-Priority Non-preemptive Scheduling
- Critical section preemptive scheduling
- Static time scheduling
- Earliest Deadline First approach
- Stochastic digraphs with multi-threaded graph traversal
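As one concrete example from the list, the Earliest Deadline First rule picks, among the ready tasks, the one whose absolute deadline is nearest. The array-based representation below is an assumption made for illustration:

```c
#include <stddef.h>

/* Minimal sketch of Earliest Deadline First task selection.
 * The edf_task layout is hypothetical. */
typedef struct {
    int id;
    unsigned long deadline;  /* absolute deadline, in ticks */
} edf_task;

/* Return the index of the ready task with the earliest deadline,
 * or -1 if there are no ready tasks. */
int edf_pick(const edf_task *tasks, size_t n) {
    if (n == 0) return -1;
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (tasks[i].deadline < tasks[best].deadline)
            best = i;
    return (int)best;
}
```

A production EDF scheduler would keep the ready set in a structure such as a heap so selection does not require a full scan.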
Intertask communication and resource sharing
Multitasking systems must manage sharing data and hardware resources among multiple tasks. It is usually "unsafe" for two tasks to access the same specific data or hardware resource simultaneously. "Unsafe" means the results are inconsistent or unpredictable. There are three common approaches to resolve this problem:
Temporarily masking/disabling interrupts
General-purpose operating systems usually do not allow user programs to mask (disable) interrupts, because the user program could then control the CPU for as long as it wishes. Some modern CPUs do not allow user-mode code to disable interrupts, as such control is considered a key operating system resource. Many embedded systems and RTOSes, however, allow the application itself to run in kernel mode for greater system call efficiency, and also to permit the application to have greater control of the operating environment without requiring OS intervention.
On single-processor systems, if the application runs in kernel mode and can mask interrupts, this method is the solution with the lowest overhead to prevent simultaneous access to a shared resource. While interrupts are masked and the current task does not make a blocking OS call, the current task has exclusive use of the CPU since no other task or interrupt can take control, so the critical section is protected. When the task exits its critical section, it must unmask interrupts; pending interrupts, if any, will then execute. Temporarily masking interrupts should only be done when the longest path through the critical section is shorter than the desired maximum interrupt latency. Typically this method of protection is used only when the critical section is just a few instructions and contains no loops. This method is ideal for protecting hardware bit-mapped registers when the bits are controlled by different tasks.
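On a host PC this pattern can only be simulated; in the sketch below, disable_interrupts()/enable_interrupts() are stand-ins for the real CPU instructions (for example a single CPSID/CPSIE pair on ARM Cortex-M), which cannot be issued from portable user-space code:

```c
/* Host-side simulation of the interrupt-masking pattern.
 * The two functions below model, not perform, interrupt masking. */
static int interrupts_enabled = 1;
static int shared_counter = 0;

static void disable_interrupts(void) { interrupts_enabled = 0; }
static void enable_interrupts(void)  { interrupts_enabled = 1; }

/* The critical section is a few instructions with no loops,
 * matching the guidance in the text. */
void increment_shared(void) {
    disable_interrupts();   /* enter critical section */
    shared_counter++;       /* exclusive access to shared data */
    enable_interrupts();    /* pending interrupts now run */
}
```

The key property is that the masked region is bounded and short, so the worst-case interrupt latency it adds can be computed at design time.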
When the shared resource must be reserved without blocking all other tasks (such as waiting for Flash memory to be written), it is better to use mechanisms also available on general-purpose operating systems, such as semaphores and OS-supervised interprocess messaging. Such mechanisms involve system calls, and usually invoke the OS's dispatcher code on exit, so they typically take hundreds of CPU instructions to execute, while masking interrupts may take as few as one instruction on some processors.
A binary semaphore is either locked or unlocked. When it is locked, tasks must wait for the semaphore to unlock. A binary semaphore is therefore equivalent to a mutex. Typically a task will set a timeout on its wait for a semaphore. There are several well-known problems with semaphore based designs such as priority inversion and deadlocks.
In priority inversion a high priority task waits because a low priority task has a semaphore, but the lower priority task is not given CPU time to finish its work. A typical solution is to have the task that owns a semaphore run at, or 'inherit,' the priority of the highest waiting task. But this simple approach fails when there are multiple levels of waiting: task A waits for a binary semaphore locked by task B, which waits for a binary semaphore locked by task C. Handling multiple levels of inheritance without introducing instability in cycles is complex and problematic.
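The single-level case of priority inheritance can be sketched as follows; the structures and the "higher number = higher priority" convention are illustrative assumptions, and the multi-level chains described above would need additional bookkeeping:

```c
#include <stddef.h>

/* Sketch of basic (single-level) priority inheritance.
 * All names and the priority convention are hypothetical. */
typedef struct {
    int base_priority;
    int effective_priority;
} pi_task;

typedef struct {
    pi_task *owner;          /* NULL when the semaphore is free */
} pi_semaphore;

/* Called when 'waiter' blocks on a semaphore owned by another
 * task: the owner temporarily inherits the waiter's priority
 * if it is higher, so it can be scheduled to finish its work. */
void pi_wait(pi_semaphore *s, pi_task *waiter) {
    if (s->owner && waiter->effective_priority > s->owner->effective_priority)
        s->owner->effective_priority = waiter->effective_priority;
}

/* On release, the owner drops back to its base priority. */
void pi_release(pi_semaphore *s) {
    if (s->owner)
        s->owner->effective_priority = s->owner->base_priority;
    s->owner = NULL;
}
```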
In a deadlock, two or more tasks lock semaphores without timeouts and then wait forever for the other task's semaphore, creating a cyclic dependency. The simplest deadlock scenario occurs when two tasks alternately lock two semaphores, but in the opposite order. Deadlock is prevented by careful design or by having floored semaphores, which pass control of a semaphore to the higher priority task on defined conditions.
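One "careful design" that prevents the opposite-order scenario is a global lock order: every semaphore carries a fixed rank, and a task may only acquire locks of increasing rank. The helper below is an illustrative sketch, not a real RTOS API:

```c
/* Sketch of deadlock prevention by global lock ordering.
 * Names and the rank scheme are hypothetical. */
typedef struct {
    int rank;     /* position in the global lock order */
    int locked;
} ranked_lock;

/* Returns 1 and takes the lock if the ordering rule allows it;
 * returns 0 (refusing the acquisition) when the request is
 * out of order and would therefore risk a cyclic wait.
 * 'highest_held' is the highest rank the calling task already
 * holds, or -1 if it holds no locks. */
int ranked_lock_acquire(ranked_lock *l, int highest_held) {
    if (l->rank <= highest_held)
        return 0;            /* out-of-order request: refused */
    l->locked = 1;
    return 1;
}
```

Because every task acquires locks in the same order, no cycle of tasks each waiting on the next can form.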
The other approach to resource sharing is for tasks to send messages in an organized message passing scheme. In this paradigm, the resource is managed directly by only one task. When another task wants to interrogate or manipulate the resource, it sends a message to the managing task. Although their real-time behavior is less crisp than semaphore systems, simple message-based systems avoid most protocol deadlock hazards, and are generally better-behaved than semaphore systems. However, problems like those of semaphores are possible. Priority inversion can occur when a task is working on a low-priority message and ignores a higher-priority message (or a message originating indirectly from a high priority task) in its incoming message queue. Protocol deadlocks can occur when two or more tasks wait for each other to send response messages.
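The scheme above can be sketched with a simple bounded queue owned by the managing task; the fixed-size ring buffer and message layout are assumptions made for illustration:

```c
/* Sketch of a message queue owned by one manager task; other
 * tasks submit requests to it. Sizes and names are hypothetical. */
#define QUEUE_LEN 8

typedef struct {
    int sender;
    int request;              /* what the sender wants done */
} message;

typedef struct {
    message slots[QUEUE_LEN];
    int head, tail, count;
} msg_queue;

/* Returns 0 if the queue is full (the sender would block). */
int msg_send(msg_queue *q, const message *m) {
    if (q->count == QUEUE_LEN) return 0;
    q->slots[q->tail] = *m;
    q->tail = (q->tail + 1) % QUEUE_LEN;
    q->count++;
    return 1;
}

/* Returns 0 if the queue is empty (the manager would block). */
int msg_receive(msg_queue *q, message *out) {
    if (q->count == 0) return 0;
    *out = q->slots[q->head];
    q->head = (q->head + 1) % QUEUE_LEN;
    q->count--;
    return 1;
}
```

Because only the manager task touches the resource, no lock on the resource itself is needed; correctness reduces to the queue being safe to share.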
Interrupt handlers and the scheduler
Since an interrupt handler blocks the highest priority task from running, and since real time operating systems are designed to keep thread latency to a minimum, interrupt handlers are typically kept as short as possible. The interrupt handler defers all interaction with the hardware if possible; typically all that is necessary is to acknowledge or disable the interrupt (so that it won't occur again when the interrupt handler returns) and notify a task that work needs to be done. This can be done by unblocking a driver task through releasing a semaphore, setting a flag or sending a message. A scheduler often provides the ability to unblock a task from interrupt handler context.
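The short-ISR pattern might look like the sketch below. The device registers and the semaphore are simulated; on real hardware the acknowledge would be a register write and the signal an RTOS "give from ISR" primitive or equivalent:

```c
/* Sketch of deferred interrupt handling: the ISR only
 * acknowledges the device and signals a driver task.
 * All state here simulates hardware and RTOS objects. */
static int device_irq_pending = 1;   /* simulated device state */
static int driver_semaphore = 0;     /* simulated binary semaphore */
static int work_done = 0;

/* Interrupt handler: a few instructions, no real work. */
void simulated_isr(void) {
    device_irq_pending = 0;   /* acknowledge so it won't re-fire */
    driver_semaphore = 1;     /* unblock the driver task */
}

/* Driver task: performs the actual hardware interaction later,
 * at task priority rather than interrupt priority. */
void driver_task_poll(void) {
    if (driver_semaphore) {
        driver_semaphore = 0;
        work_done++;          /* the deferred work happens here */
    }
}
```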
An OS maintains catalogues of objects it manages such as threads, mutexes, memory, and so on. Updates to this catalogue must be strictly controlled. For this reason it can be problematic when an interrupt handler calls an OS function while the application is in the act of also doing so. The OS function called from an interrupt handler could find the object database to be in an inconsistent state because of the application's update. There are two major approaches to deal with this problem: the unified architecture and the segmented architecture. RTOSs implementing the unified architecture solve the problem by simply disabling interrupts while the internal catalogue is updated. The downside of this is that interrupt latency increases, potentially losing interrupts. The segmented architecture does not make direct OS calls but delegates the OS related work to a separate handler. This handler runs at a higher priority than any thread but lower than the interrupt handlers. The advantage of this architecture is that it adds very few cycles to interrupt latency. As a result, OSes which implement the segmented architecture are more predictable and can deal with higher interrupt rates compared to the unified architecture.
Memory allocation is more critical in a real-time operating system than in other operating systems.
First, for stability there cannot be memory leaks (memory that is allocated, then unused but never freed). The device should work indefinitely, without ever a need for a reboot. For this reason, dynamic memory allocation is frowned upon. Whenever possible, allocation of all required memory is specified statically at compile time.
Another reason to avoid dynamic memory allocation is memory fragmentation. With frequent allocation and releasing of small chunks of memory, the memory may become divided into several sections, so that the RTOS cannot allocate a large contiguous block even though there is enough free memory overall. Second, speed of allocation is important. A standard memory allocation scheme scans a linked list of indeterminate length to find a suitable free memory block, which is unacceptable in an RTOS since memory allocation has to occur within a certain amount of time.
Because mechanical disks have much longer and more unpredictable response times, swapping to disk files is avoided, for the same reasons as the dynamic RAM allocation discussed above.
The simple fixed-size-blocks algorithm works quite well for simple embedded systems because of its low overhead.
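A minimal fixed-size-blocks allocator is sketched below: allocate and free are constant-time pops and pushes on a free list threaded through a statically reserved pool, so there is neither fragmentation nor an unbounded search. The pool dimensions and names are illustrative:

```c
#include <stddef.h>

/* Sketch of a fixed-size-blocks allocator; sizes are illustrative. */
#define BLOCK_SIZE  32
#define NUM_BLOCKS  16

static unsigned char pool[NUM_BLOCKS][BLOCK_SIZE];
static void *free_list = NULL;
static int pool_initialized = 0;

static void pool_init(void) {
    /* Thread each block's first word onto a free list. */
    for (int i = 0; i < NUM_BLOCKS; i++) {
        *(void **)pool[i] = free_list;
        free_list = pool[i];
    }
    pool_initialized = 1;
}

void *block_alloc(void) {
    if (!pool_initialized) pool_init();
    if (!free_list) return NULL;     /* pool exhausted */
    void *block = free_list;
    free_list = *(void **)block;     /* O(1): pop the head */
    return block;
}

void block_free(void *block) {
    *(void **)block = free_list;     /* O(1): push back */
    free_list = block;
}
```

Both operations take a small, fixed number of instructions, which is what makes the worst-case allocation time computable, as an RTOS requires.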
A common example of an RTOS application is an HDTV receiver and display. It needs to read a digital signal, decode it and display it as the data comes in. Any delay would be noticeable as jerky or pixelated video and/or garbled audio.
According to a 2014 Embedded Market Study, the following RTOSes are among the top 10 operating systems used in the embedded systems market:
- Green Hills Software INTEGRITY
- Wind River VxWorks
- QNX Neutrino
- Micrium µC/OS-II, III
- Windows CE
- TI-RTOS Kernel (previously called DSP/BIOS)
See the comparison of real-time operating systems for a comprehensive list. Also, see the list of operating systems for all types of operating systems.
- Adaptive Partition Scheduler
- Earliest deadline first scheduling
- Interruptible operating system
- Least slack time scheduling
- Rate-monotonic scheduling
- RDOS (disambiguation)
- ROS (disambiguation)
- Synchronous programming language
- Time-triggered system
- Time-utility function
- "Response Time and Jitter".
- Tanenbaum, Andrew (2008). Modern Operating Systems. Upper Saddle River, NJ: Pearson/Prentice Hall. p. 160. ISBN 978-0-13-600663-3.
- "RTOS Concepts".
- "Context switching time". Segger Microcontroller Systems. Retrieved 2009-12-20.
- "RTOS performance comparison on emb4fun.de".
- CS 241, University of Illinois
- "2014 Embedded Market Study" (PDF). EE Live!. UBM. p. 44.
An embedded system is a computer system with a dedicated function within a larger mechanical or electrical system, often with real-time computing constraints. It is embedded as part of a complete device often including hardware and mechanical parts. Embedded systems control many devices in common use today.
Examples of properties typical of embedded computers when compared with general-purpose ones are low power consumption, small size, rugged operating ranges, and low per-unit cost. This comes at the price of limited processing resources, which make them significantly more difficult to program and to interface with. However, by building intelligence mechanisms on top of the hardware, taking advantage of existing sensors and of a network of embedded units, one can optimally manage available resources at the unit and network levels and provide augmented functionality well beyond that available from a single unit. For example, intelligent techniques can be designed to manage the power consumption of embedded systems.
Modern embedded systems are often based on microcontrollers (i.e. CPUs with integrated memory or peripheral interfaces) but ordinary microprocessors (using external chips for memory and peripheral interface circuits) are also still common, especially in more complex systems. In either case, the processor(s) used may be types ranging from general purpose to those specialised in certain class of computations, or even custom designed for the application at hand. A common standard class of dedicated processors is the digital signal processor (DSP).
Since the embedded system is dedicated to specific tasks, design engineers can optimize it to reduce the size and cost of the product and increase the reliability and performance. Some embedded systems are mass-produced, benefiting from economies of scale.
Embedded systems range from portable devices such as digital watches and MP3 players, to large stationary installations like traffic lights and factory controllers, and to complex systems like hybrid vehicles, MRI machines, and avionics. Complexity varies from low, with a single microcontroller chip, to very high with multiple units, peripherals and networks mounted inside a large chassis or enclosure.
Embedded systems are commonly found in consumer, cooking, industrial, automotive, medical, commercial and military applications.
Telecommunications systems employ numerous embedded systems from telephone switches for the network to cell phones at the end-user. Computer networking uses dedicated routers and network bridges to route data.
Consumer electronics include personal digital assistants (PDAs), MP3 players, mobile phones, videogame consoles, digital cameras, DVD players, GPS receivers, and printers. Household appliances, such as microwave ovens, washing machines and dishwashers, include embedded systems to provide flexibility, efficiency and features. Advanced HVAC systems use networked thermostats to control temperature more accurately and efficiently, varying it by time of day and season. Home automation uses wired and wireless networking to control lights, climate, security, audio/visual equipment, surveillance, etc., all of which use embedded devices for sensing and controlling.
Transportation systems from flight to automobiles increasingly use embedded systems. New airplanes contain advanced avionics such as inertial guidance systems and GPS receivers that also have considerable safety requirements. Various electric motors — brushless DC motors, induction motors and DC motors — use electric/electronic motor controllers. Automobiles, electric vehicles, and hybrid vehicles increasingly use embedded systems to maximize efficiency and reduce pollution. Other automotive safety systems include anti-lock braking system (ABS), Electronic Stability Control (ESC/ESP), traction control (TCS) and automatic four-wheel drive.
Medical equipment uses embedded systems for vital signs monitoring, electronic stethoscopes for amplifying sounds, and various medical imaging (PET, SPECT, CT, MRI) for non-invasive internal inspections. Embedded systems within medical equipment are often powered by industrial computers.
Embedded systems are used in transportation, fire safety, safety and security, medical applications and life-critical systems, as these systems can be isolated from hacking and thus be more reliable. For fire safety, the systems can be designed with a greater ability to handle high temperatures and continue to operate. In dealing with security, embedded systems can be self-sufficient and able to cope with cut electrical and communication systems.
A new class of miniature wireless devices called motes are networked wireless sensors. Wireless sensor networking, WSN, makes use of miniaturization made possible by advanced IC design to couple full wireless subsystems to sophisticated sensors, enabling people and companies to measure a myriad of things in the physical world and act on this information through IT monitoring and control systems. These motes are completely self-contained, and will typically run off a battery source for years before the batteries need to be changed or charged.
Embedded Wi-Fi modules provide a simple means of wirelessly enabling any device which communicates via a serial port.
One of the very first recognizably modern embedded systems was the Apollo Guidance Computer, developed by Charles Stark Draper at the MIT Instrumentation Laboratory. At the project's inception, the Apollo guidance computer was considered the riskiest item in the Apollo project as it employed the then newly developed monolithic integrated circuits to reduce the size and weight. An early mass-produced embedded system was the Autonetics D-17 guidance computer for the Minuteman missile, released in 1961. When the Minuteman II went into production in 1966, the D-17 was replaced with a new computer that was the first high-volume use of integrated circuits. This program alone reduced prices on quad NAND gate ICs from US$1000 each to US$3 each, permitting their use in commercial products.
Since these early applications in the 1960s, embedded systems have come down in price and there has been a dramatic rise in processing power and functionality. An early microprocessor, the Intel 4004 for example, was designed for calculators and other small systems but still required external memory and support chips. In 1978 the National Electrical Manufacturers Association released a "standard" for programmable microcontrollers, covering almost any computer-based controller, such as single-board computers, numerical controllers, and event-based controllers.
As the cost of microprocessors and microcontrollers fell it became feasible to replace expensive knob-based analog components such as potentiometers and variable capacitors with up/down buttons or knobs read out by a microprocessor even in consumer products. By the early 1980s, memory, input and output system components had been integrated into the same chip as the processor forming a microcontroller. Microcontrollers find applications where a general-purpose computer would be too costly.
A comparatively low-cost microcontroller may be programmed to fulfill the same role as a large number of separate components. Although in this context an embedded system is usually more complex than a traditional solution, most of the complexity is contained within the microcontroller itself. Very few additional components may be needed and most of the design effort is in the software. Software prototype and test can be quicker compared with the design and construction of a new circuit not using an embedded processor.
Embedded systems are designed to do some specific task, rather than be a general-purpose computer for multiple tasks. Some also have real-time performance constraints that must be met, for reasons such as safety and usability; others may have low or no performance requirements, allowing the system hardware to be simplified to reduce costs.
Embedded systems are not always standalone devices. Many embedded systems consist of small, computerized parts within a larger device that serves a more general purpose. For example, the Gibson Robot Guitar features an embedded system for tuning the strings, but the overall purpose of the Robot Guitar is, of course, to play music. Similarly, an embedded system in an automobile provides a specific function as a subsystem of the car itself.
The program instructions written for embedded systems are referred to as firmware, and are stored in read-only memory or Flash memory chips. They run with limited computer hardware resources: little memory, small or non-existent keyboard or screen.
Embedded systems range from no user interface at all, in systems dedicated only to one task, to complex graphical user interfaces that resemble modern computer desktop operating systems. Simple embedded devices use buttons, LEDs, graphic or character LCDs (HD44780 LCD for example) with a simple menu system.
More sophisticated devices which use a graphical screen with touch sensing or screen-edge buttons provide flexibility while minimizing space used: the meaning of the buttons can change with the screen, and selection involves the natural behavior of pointing at what's desired. Handheld systems often have a screen with a "joystick button" for a pointing device.
Some systems provide a user interface remotely with the help of a serial (e.g. RS-232, USB, I²C, etc.) or network (e.g. Ethernet) connection. This approach has several advantages: it extends the capabilities of the embedded system, avoids the cost of a display, simplifies the board support package (BSP), and allows designers to build a rich user interface on the PC. A good example of this is an embedded web server running on an embedded device such as an IP camera or a network router. The user interface is displayed in a web browser on a PC connected to the device, so no software needs to be installed on the PC.
Processors in embedded systems
Embedded processors can be broken into two broad categories. Ordinary microprocessors (μP) use separate integrated circuits for memory and peripherals. Microcontrollers (μC) have on-chip peripherals, thus reducing power consumption, size and cost. In contrast to the personal computer market, many different basic CPU architectures are used, since software is custom-developed for an application and is not a commodity product installed by the end user. Both Von Neumann and various degrees of Harvard architectures are used. RISC as well as non-RISC processors are found. Word lengths vary from 4 bits to 64 bits and beyond, although the most typical remain 8/16-bit. Most architectures come in a large number of different variants and shapes, many of which are also manufactured by several different companies.
Numerous microcontrollers have been developed for embedded systems use. General-purpose microprocessors are also used in embedded systems, but generally require more support circuitry than microcontrollers.
Ready-made computer boards
PC/104 and PC/104+ are examples of standards for ready-made computer boards intended for small, low-volume embedded and ruggedized systems, mostly x86-based. These are often physically small compared to a standard PC, although still quite large compared to most simple (8/16-bit) embedded systems. They often use DOS, Linux, NetBSD, or an embedded real-time operating system such as MicroC/OS-II, QNX or VxWorks. Sometimes these boards use non-x86 processors.
In certain applications, where small size or power efficiency are not primary concerns, the components used may be compatible with those used in general purpose x86 personal computers. Boards such as the VIA EPIA range help to bridge the gap by being PC-compatible but highly integrated, physically smaller or have other attributes making them attractive to embedded engineers. The advantage of this approach is that low-cost commodity components may be used along with the same software development tools used for general software development. Systems built in this way are still regarded as embedded since they are integrated into larger devices and fulfill a single role. Examples of devices that may adopt this approach are ATMs and arcade machines, which contain code specific to the application.
However, most ready-made embedded systems boards are not PC-centered and do not use the ISA or PCI busses. When a system-on-a-chip processor is involved, there may be little benefit to having a standardized bus connecting discrete components, and the environment for both hardware and software tools may be very different.
One common design style uses a small system module, perhaps the size of a business card, holding high density BGA chips such as an ARM-based System-on-a-chip processor and peripherals, external flash memory for storage, and DRAM for runtime memory. The module vendor will usually provide boot software and make sure there is a selection of operating systems, usually including Linux and some real time choices. These modules can be manufactured in high volume, by organizations familiar with their specialized testing issues, and combined with much lower volume custom mainboards with application-specific external peripherals.
The implementation of embedded systems has advanced: systems can now be built easily with ready-made boards based on widely accepted platforms, including, but not limited to, Arduino and Raspberry Pi.
ASIC and FPGA solutions
A common configuration for very-high-volume embedded systems is the system on a chip (SoC), which contains a complete system consisting of multiple processors, multipliers, caches and interfaces on a single chip. SoCs can be implemented as an application-specific integrated circuit (ASIC) or using a field-programmable gate array (FPGA).
Embedded Systems talk with the outside world via peripherals, such as:
- Serial Communication Interfaces (SCI): RS-232, RS-422, RS-485 etc.
- Synchronous Serial Communication Interface: I2C, SPI, SSC and ESSI (Enhanced Synchronous Serial Interface)
- Universal Serial Bus (USB)
- Multi Media Cards (SD Cards, Compact Flash etc.)
- Networks: Ethernet, LonWorks, etc.
- Fieldbuses: CAN-Bus, LIN-Bus, PROFIBUS, etc.
- Timers: PLL(s), Capture/Compare and Time Processing Units
- Discrete IO: aka General Purpose Input/Output (GPIO)
- Analog to Digital/Digital to Analog (ADC/DAC)
- Debugging: JTAG, ISP, ICSP, BDM Port, BITP, and DB9 ports.
As with other software, embedded system designers use compilers, assemblers, and debuggers to develop embedded system software. However, they may also use some more specific tools:
- In circuit debuggers or emulators (see next section).
- Utilities to add a checksum or CRC to a program, so the embedded system can check if the program is valid.
- For systems using digital signal processing, developers may use a math workbench such as Scilab / Scicos, MATLAB / Simulink, EICASLAB, MathCad, Mathematica, or FlowStone DSP to simulate the mathematics. They might also use libraries for both the host and target which eliminate developing DSP routines, as done in DSPnano RTOS.
- A model based development tool like VisSim lets you create and simulate graphical data flow and UML State chart diagrams of components like digital filters, motor controllers, communication protocol decoding and multi-rate tasks. Interrupt handlers can also be created graphically. After simulation, you can automatically generate C-code to the VisSim RTOS which handles the main control task and preemption of background tasks, as well as automatic setup and programming of on-chip peripherals.
- Custom compilers and linkers may be used to optimize specialized hardware.
- An embedded system may have its own special language or design tool, or add enhancements to an existing language such as Forth or Basic.
- Another alternative is to add a real-time operating system or embedded operating system, which may have DSP capabilities like DSPnano RTOS.
- Modeling and code generating tools often based on state machines
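The checksum/CRC utilities mentioned above support firmware self-checks of the following kind: the build tool stores a CRC alongside the image, and the system recomputes it at boot. The sketch below uses the standard reflected CRC-32 (polynomial 0xEDB88320); the function names and the check scheme are illustrative rather than any specific tool's format:

```c
#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-32 (reflected, polynomial 0xEDB88320), the common
 * IEEE 802.3 variant; shown as a sketch of an image self-check. */
uint32_t crc32(const uint8_t *data, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1));
    }
    return ~crc;
}

/* Returns nonzero if the stored CRC matches the image contents,
 * i.e. the program in flash is considered valid. */
int image_valid(const uint8_t *image, size_t len, uint32_t stored_crc) {
    return crc32(image, len) == stored_crc;
}
```

An embedded build would typically replace the bitwise loop with a table-driven or hardware CRC for speed; the check value for the ASCII string "123456789" under this variant is 0xCBF43926.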
Software tools can come from several sources:
- Software companies that specialize in the embedded market
- Ported from the GNU software development tools
- Sometimes, development tools for a personal computer can be used if the embedded processor is a close relative to a common PC processor
As the complexity of embedded systems grows, higher level tools and operating systems are migrating into machinery where it makes sense. For example, cellphones, personal digital assistants and other consumer computers often need significant software that is purchased or provided by a person other than the manufacturer of the electronics. In these systems, an open programming environment such as Linux, NetBSD, OSGi or Embedded Java is required so that the third-party software provider can sell to a large market.
Embedded debugging may be performed at different levels, depending on the facilities available. From simplest to most sophisticated they can be roughly grouped into the following areas:
- Interactive resident debugging, using the simple shell provided by the embedded operating system (e.g. Forth and Basic)
- External debugging using logging or serial port output to trace operation using either a monitor in flash or using a debug server like the Remedy Debugger which even works for heterogeneous multicore systems.
- An in-circuit debugger (ICD), a hardware device that connects to the microprocessor via a JTAG or Nexus interface. This allows the operation of the microprocessor to be controlled externally, but is typically restricted to specific debugging capabilities in the processor.
- An in-circuit emulator (ICE) replaces the microprocessor with a simulated equivalent, providing full control over all aspects of the microprocessor.
- A complete emulator provides a simulation of all aspects of the hardware, allowing all of it to be controlled and modified, and allowing debugging on a normal PC. The downsides are expense and slow operation, in some cases up to 100X slower than the final system.
- For SoC designs, the typical approach is to verify and debug the design on an FPGA prototype board. Tools such as Certus are used to insert probes in the FPGA RTL that make signals available for observation. This is used to debug hardware, firmware and software interactions across multiple FPGAs, with capabilities similar to a logic analyzer.
Unless restricted to external debugging, the programmer can typically load and run software through the tools, view the code running in the processor, and start or stop its operation. The view of the code may be as HLL source-code, assembly code or mixture of both.
Because an embedded system is often composed of a wide variety of elements, the debugging strategy may vary. For instance, debugging a software- (and microprocessor-) centric embedded system is different from debugging an embedded system where most of the processing is performed by peripherals (DSP, FPGA, co-processor). An increasing number of embedded systems today use more than one processor core. A common problem with multi-core development is the proper synchronization of software execution. In such a case, the embedded system designer may wish to check the data traffic on the busses between the processor cores, which requires very low-level debugging, at signal/bus level, with a logic analyzer, for instance.
Real-time operating systems (RTOS) often support tracing of operating system events. A graphical view is presented by a host PC tool, based on a recording of the system behavior. The trace recording can be performed in software, by the RTOS, or by special tracing hardware. RTOS tracing allows developers to understand timing and performance issues of the software system and gives a good understanding of high-level system behavior. Commercial tools such as RTXC Quadros or IAR Systems exist.
Embedded systems often reside in machines that are expected to run continuously for years without errors, and in some cases recover by themselves if an error occurs. Therefore, the software is usually developed and tested more carefully than that for personal computers, and unreliable mechanical moving parts such as disk drives, switches or buttons are avoided.
Specific reliability issues may include:
- The system cannot safely be shut down for repair, or it is too inaccessible to repair. Examples include space systems, undersea cables, navigational beacons, bore-hole systems, and automobiles.
- The system must be kept running for safety reasons. "Limp modes" are less tolerable. Often backups are selected by an operator. Examples include aircraft navigation, reactor control systems, safety-critical chemical factory controls, train signals.
- The system will lose large amounts of money when shut down: Telephone switches, factory controls, bridge and elevator controls, funds transfer and market making, automated sales and service.
A variety of techniques are used, sometimes in combination, to recover from errors—both software bugs such as memory leaks, and also soft errors in the hardware:
- watchdog timer that resets the computer unless the software periodically notifies the watchdog
- subsystems with redundant spares that can be switched over to
- software "limp modes" that provide partial function
- Designing with a Trusted Computing Base (TCB) architecture ensures a highly secure and reliable system environment
- A hypervisor designed for embedded systems is able to provide secure encapsulation for any subsystem component, so that a compromised software component cannot interfere with other subsystems or privileged-level system software. This encapsulation keeps faults from propagating from one subsystem to another, improving reliability. It may also allow a subsystem to be automatically shut down and restarted on fault detection.
- Immunity Aware Programming
High vs low volume
For high volume systems such as portable music players or mobile phones, minimizing cost is usually the primary design consideration. Engineers typically select hardware that is just “good enough” to implement the necessary functions.
For low-volume or prototype embedded systems, general purpose computers may be adapted by limiting the programs or by replacing the operating system with a real-time operating system.
Embedded software architectures
There are several different types of software architecture in common use.
Simple control loop
In this design, the software simply has a loop. The loop calls subroutines, each of which manages a part of the hardware or software.
Interrupt-controlled system
Some embedded systems are predominantly controlled by interrupts. This means that tasks performed by the system are triggered by different kinds of events; an interrupt could be generated, for example, by a timer at a predefined frequency, or by a serial port controller receiving a byte.
These kinds of systems are used if event handlers need low latency, and the event handlers are short and simple. Usually, these kinds of systems run a simple task in a main loop also, but this task is not very sensitive to unexpected delays.
Sometimes the interrupt handler will add longer tasks to a queue structure. Later, after the interrupt handler has finished, these tasks are executed by the main loop. This method brings the system close to a multitasking kernel with discrete processes.
Cooperative multitasking
A nonpreemptive multitasking system is very similar to the simple control loop scheme, except that the loop is hidden in an API. The programmer defines a series of tasks, and each task gets its own environment to “run” in. When a task is idle, it calls an idle routine, usually called “pause”, “wait”, “yield”, “nop” (stands for no operation), etc.
The advantages and disadvantages are similar to those of the control loop, except that adding new software is easier: one simply writes a new task, or adds to the queue.
Preemptive multitasking or multi-threading
In this type of system, a low-level piece of code switches between tasks or threads based on a timer (connected to an interrupt). This is the level at which the system is generally considered to have an "operating system" kernel. Depending on how much functionality is required, it introduces more or less of the complexities of managing multiple tasks running conceptually in parallel.
As any code can potentially damage the data of another task (except in larger systems using an MMU) programs must be carefully designed and tested, and access to shared data must be controlled by some synchronization strategy, such as message queues, semaphores or a non-blocking synchronization scheme.
Because of these complexities, it is common for organizations to use a real-time operating system (RTOS), allowing the application programmers to concentrate on device functionality rather than operating system services, at least for large systems; smaller systems often cannot afford the overhead associated with a generic real-time system, due to limitations on memory size, performance, or battery life. Choosing an RTOS brings in its own issues, however, as the selection must be made before the application development process starts. This timing forces developers to choose the embedded operating system for their device based upon current requirements, and so restricts future options to a large extent; this restriction becomes more of an issue as product life decreases. Additionally, the level of complexity is continuously growing as devices are required to manage variables such as serial, USB, TCP/IP, Bluetooth, wireless LAN, trunk radio, multiple channels, data and voice, enhanced graphics, multiple states, multiple threads, numerous wait states and so on. These trends are leading to the uptake of embedded middleware in addition to a real-time operating system.
Microkernels and exokernels
A microkernel is a logical step up from a real-time OS. The usual arrangement is that the operating system kernel allocates memory and switches the CPU to different threads of execution. User mode processes implement major functions such as file systems, network interfaces, etc.
In general, microkernels succeed when the task switching and intertask communication is fast and fail when they are slow.
Exokernels communicate efficiently by normal subroutine calls. The hardware and all the software in the system are available to and extensible by application programmers.
Monolithic kernels
In this case, a relatively large kernel with sophisticated capabilities is adapted to suit an embedded environment. This gives programmers an environment similar to a desktop operating system like Linux or Microsoft Windows, and is therefore very productive for development; on the downside, it requires considerably more hardware resources, is often more expensive, and, because of the complexity of these kernels, can be less predictable and reliable.
Common examples of embedded monolithic kernels are embedded Linux and Windows CE.
Despite the increased cost in hardware, this type of embedded system is increasing in popularity, especially on the more powerful embedded devices such as wireless routers and GPS navigation systems. Here are some of the reasons:
- Ports to common embedded chip sets are available.
- They permit re-use of publicly available code for device drivers, web servers, firewalls, and other code.
- Development systems can start out with broad feature-sets, and then the distribution can be configured to exclude unneeded functionality, and save the expense of the memory that it would consume.
- Many engineers believe that running application code in user mode is more reliable and easier to debug, thus making the development process easier and the code more portable.
- Features requiring faster response than can be guaranteed can often be placed in hardware.
Additional software components
In addition to the core operating system, many embedded systems have additional upper-layer software components. These components consist of networking protocol stacks like CAN, TCP/IP, FTP, HTTP, and HTTPS, as well as storage capabilities like FAT and flash memory management systems. If the embedded device has audio and video capabilities, then the appropriate drivers and codecs will be present in the system. In the case of monolithic kernels, many of these software layers are included. In the RTOS category, the availability of the additional software components depends upon the commercial offering.
- Communications server
- Cyber-physical system
- Electronic Control Unit
- Embedded operating systems
- Embedded software
- Information appliance
- Programming languages
- Real-time operating system
- Software engineering
- System on a chip
- System on module
- Ubiquitous computing
Hardware-in-the-loop (HIL) simulation, also known as HWIL, is a technique used in the development and testing of complex real-time embedded systems. HIL simulation provides an effective platform by adding the complexity of the plant under control to the test platform. The complexity of the plant under control is included in test and development by adding a mathematical representation of all related dynamic systems. These mathematical representations are referred to as the “plant simulation”. The embedded system to be tested interacts with this plant simulation.
- 1 How HIL works
- 2 Why use hardware-in-the-loop simulation?
- 2.1 Enhancing the quality of testing
- 2.2 Tight development schedules
- 2.3 High-burden-rate plant
- 2.4 Early process human factors development
- 3 Use in various disciplines
- 3.1 Automotive systems
- 3.2 Power electronics
- 3.3 Radar
- 3.4 Robotics
- 3.5 Power systems
- 3.6 Offshore systems
- 4 References
- 5 External links
How HIL works
An HIL simulation must include electrical emulation of sensors and actuators. These electrical emulations act as the interface between the plant simulation and the embedded system under test. The value of each electrically emulated sensor is controlled by the plant simulation and is read by the embedded system under test (feedback). Likewise, the embedded system under test implements its control algorithms by outputting actuator control signals. Changes in the control signals result in changes to variable values in the plant simulation.
For example, a HIL simulation platform for the development of automotive anti-lock braking systems may have mathematical representations for each of the following subsystems in the plant simulation:
- Vehicle dynamics, such as suspension, wheels, tires, roll, pitch and yaw;
- Dynamics of the brake system’s hydraulic components;
- Road characteristics.
Why use hardware-in-the-loop simulation?
In many cases, the most effective way to develop an embedded system is to connect the embedded system to the real plant. In other cases, HIL simulation is more efficient. The metric of development and test efficiency is typically a formula that includes the following factors:
1. Cost
2. Duration
3. Safety
4. Feasibility
The cost of the approach should be a measure of the cost of all tools and effort. The duration of development and testing affects the time-to-market for a planned product. Safety factor and development duration are typically equated to a cost measure. Specific conditions that warrant the use of HIL simulation include the following:
- Enhancing the quality of testing
- Tight development schedules
- High-burden-rate plant
- Early process human factor development
Enhancing the quality of testing
Use of HIL enhances the quality of testing by increasing its scope. Ideally, an embedded system would be tested against the real plant, but most of the time the real plant itself imposes limitations on the scope of the testing. For example, testing an engine control unit against the real plant can create the following dangerous conditions for the test engineer:
- Testing at or beyond the limits of certain ECU parameters (e.g., engine parameters)
- Testing and verification of the system at failure conditions
In the test scenarios mentioned above, HIL provides efficient control and a safe environment in which a test or application engineer can focus on the functionality of the controller.
Tight development schedules
The tight development schedules associated with most new automotive, aerospace and defense programs do not allow embedded system testing to wait for a prototype to be available. In fact, most new development schedules assume that HIL simulation will be used in parallel with the development of the plant. For example, by the time a new automobile engine prototype is made available for control system testing, 95% of the engine controller testing will have been completed using HIL simulation.
The aerospace and defense industries are even more likely to impose a tight development schedule. Aircraft and land vehicle development programs are using desktop and HIL simulation to perform design, test, and integration in parallel.
High-burden-rate plant
In many cases, the plant is more expensive than a high fidelity, real-time simulator and therefore has a higher-burden rate. Therefore, it is more economical to develop and test while connected to a HIL simulator than the real plant. For jet engine manufacturers, HIL simulation is a fundamental part of engine development. The development of Full Authority Digital Engine Controllers (FADEC) for aircraft jet engines is an extreme example of a high-burden-rate plant. Each jet engine can cost millions of dollars. In contrast, a HIL simulator designed to test a jet engine manufacturer’s complete line of engines may demand merely a tenth of the cost of a single engine.
Early process human factors development
HIL simulation is a key step in the process of developing human factors, a method of ensuring usability and system consistency using software ergonomics, human-factors research and design. For real-time technology, human-factors development is the task of collecting usability data from man-in-the-loop testing for components that will have a human interface.
An example of usability testing is the development of fly-by-wire flight controls. Fly-by-wire flight controls eliminate the mechanical linkages between the flight controls and the aircraft control surfaces. Sensors communicate the demanded flight response and then apply realistic force feedback to the fly-by-wire controls using motors. The behavior of fly-by-wire flight controls is defined by control algorithms. Changes in algorithm parameters can translate into more or less flight response from a given flight control input. Likewise, changes in the algorithm parameters can also translate into more or less force feedback for a given flight control input. The “correct” parameter values are a subjective measure. Therefore, it is important to get input from numerous man-in-the-loop tests to obtain optimal parameter values.
In the case of fly-by-wire flight controls development, HIL simulation is used to simulate human factors. The flight simulator includes plant simulations of aerodynamics, engine thrust, environmental conditions, flight control dynamics and more. Prototype fly-by-wire flight controls are connected to the simulator and test pilots evaluate flight performance given various algorithm parameters.
The alternative to HIL simulation for human factors and usability development is to place prototype flight controls in early aircraft prototypes and test for usability during flight test. This approach fails on the four factors listed above. Cost: flight test is extremely costly, and therefore the goal is to minimize any development occurring during flight test. Duration: developing flight controls with flight test will extend the duration of an aircraft development program, whereas with HIL simulation the flight controls may be developed well before a real aircraft is available. Safety: using flight test for the development of critical components such as flight controls has a major safety implication; should errors be present in the design of the prototype flight controls, the result could be a crash landing. Feasibility: it may not be possible to explore certain critical timings (e.g. sequences of user actions with millisecond precision) with real users operating a plant, and the same holds for problematic points in parameter space that may not be easily reachable with a real plant but must be tested against the hardware in question.
Use in various disciplines
Automotive systems
In the context of automotive applications, "hardware-in-the-loop simulation systems provide such a virtual vehicle for systems validation and verification." Since in-vehicle driving tests for evaluating the performance and diagnostic functionality of engine management systems are often time-consuming, expensive and not reproducible, HIL simulators allow developers to validate new hardware and software automotive solutions while respecting quality requirements and time-to-market restrictions. In a typical HIL simulator, engine dynamics are emulated by mathematical models executed by a dedicated real-time processor. In addition, an I/O unit allows the connection of vehicle sensors and actuators (which usually exhibit a high degree of non-linearity). Finally, the electronic control unit (ECU) under test is connected to the system and stimulated by a set of vehicle maneuvers executed by the simulator. At this point, HIL simulation also offers a high degree of repeatability during the testing phase.
In the literature, several HIL-specific applications are reported, and simplified HIL simulators have been built for specific purposes. When testing a new ECU software release, for example, experiments can be performed in open loop, and therefore several engine dynamic models are no longer required. The strategy is restricted to the analysis of ECU outputs when excited by controlled inputs. In this case, a micro HIL (MHIL) system offers a simpler and more economical solution. Since the complexity of model processing is reduced, a full-size HIL system shrinks to a portable device composed of a signal generator, an I/O board, and a console containing the actuators (external loads) to be connected to the ECU.
Power electronics
Hardware-in-the-loop simulation for power electronics systems is the next quantum leap in the evolution of HIL technologies. The ability to design and automatically test power electronics systems with HIL simulations will shorten development cycles, increase efficiency, and improve the reliability and safety of these systems for a large number of applications. Indeed, power electronics is an enabling technology for hybrid electric vehicles, electric vehicles, variable-speed wind turbines, solar photovoltaics, industrial automation, electric trains, etc. There are at least three strong reasons for using hardware-in-the-loop simulation for power electronics, namely:
- reduction of development cycle,
- demand to extensively test control hardware and software in order to meet safety and quality requirements, and
- need to prevent costly and dangerous failures.
The question is why power electronics systems are so different, considering that HIL has been used in aerospace and automotive applications for decades. Power electronics systems are a class of dynamic systems that exhibit extremely fast dynamics due to the high-frequency switching action of power electronic switches (e.g. IGBTs, MOSFETs, IGCTs, diodes, etc.). Real-time simulation of switching transitions requires digital processor speeds and latencies that can now be met with off-the-shelf computer systems; FPGA/CPU platform technologies, up to 100 times faster than traditional computational methods, make high-resolution HIL for power electronics achievable.
Radar
HIL simulation for radar systems has evolved from radar jamming. Digital Radio Frequency Memory (DRFM) systems are typically used to create false targets to confuse the radar on the battlefield, but these same systems can simulate a target in the laboratory. This configuration allows for the testing and evaluation of the radar system, reducing the need for flight trials (for airborne radar systems) and field tests (for search or tracking radars), and can give an early indication of the susceptibility of the radar to electronic warfare (EW) techniques.
Robotics
Techniques for HIL simulation have recently been applied to the automatic generation of complex controllers for robots. A robot uses its own real hardware to extract sensation and actuation data, then uses this data to infer a physical simulation (self-model) containing aspects such as its own morphology as well as characteristics of the environment. Algorithms such as Back-to-Reality (BTR) and Estimation Exploration (EEA) have been proposed in this context.
Power systems
In recent years, HIL for power systems has been used for verifying the stability, operation, and fault tolerance of large-scale electrical grids. Current-generation real-time processing platforms have the capability to model large-scale power systems in real-time. This includes systems with more than 10,000 buses with associated generators, loads, power-factor correction devices, and network interconnections. These types of simulation platforms enable the evaluation and testing of large-scale power systems in a realistic emulated environment. Moreover, HIL for power systems has been used for investigating the integration of distributed resources, next-generation SCADA systems and power management units, and static synchronous compensator devices.
The key feature of rapid-prototyping systems is the fast transfer of slow-motion off-line simulation, running on a PC or workstation, to a real-time simulation running on a dedicated processor linked to the physical hardware components. From energy production to aerospace and aeronautics, robotics, automotive, naval, and defense, real-time simulation and support for hardware-in-the-loop (HIL) modeling are increasingly recognized as essential design tools.
The mobile manipulator consists of a two-wheeled differential-drive platform and a 5-DOF arm with servo joints. An array of 8 sonar sensors covers a full 180-degree view in front of the robot, and the wheels and manipulator joints are equipped with optical encoders. The robot can communicate over radio frequency with a PC that hosts a comprehensive software application for navigating and controlling the robot. An on-board microcontroller handles the real-time control, dealing with sensor and actuator signals and radio communications with the host PC.
A remotely accessible real-time control experiment consists of a flexible beam clamped at one end, with the other end equipped with a servo motor that drives an eccentric load. The purpose of the experiment is to design a controller in MATLAB® to dampen out the vibrations in the beam.
Strain Gage and Materials Testing
A remotely accessible stress-strain test bed allows students to determine the elastic modulus, Poisson's ratio, and the material loss factors for a variety of pre-gaged cantilever beams using surface-mounted Constantan alloy strain gages. Comparisons can be made between a variety of materials, including several advanced composites. The test bed is a custom experimentation platform developed from the ground up for teleoperation.
Photoelastic Stress Analysis
A remotely accessible photoelastic stress test bed allows students to visualize stress gradients in a variety of structural components using polarized light. Stress concentration, distribution and failure criteria are determined using computer-aided image analysis of pictures collected using a digital camera. The test bed is a custom experimentation platform developed from the ground up for teleoperation.
Aerodynamic Subsonic Forces on an Airfoil
A remotely accessible subsonic wind tunnel allows students to obtain the section lift and drag coefficients for a symmetric airfoil by measuring the pressure distributions on the airfoil surface using a variety of pressure sensors. Some dynamic observations can also be performed for the airfoil pitching with various amplitudes and speeds. The wind tunnel was adapted for teleoperation using an integrated data-acquisition hardware control system.
Supersonic Flow and Shockwaves
A remotely accessible supersonic wind tunnel allows students to characterize a supersonic flow using static and impact pressure probes to monitor velocities in the tunnel at various points along the flow. A Schlieren camera system is used to visualize shock waves from objects placed in the tunnel test section. The wind tunnel was adapted for teleoperation using an integrated data-acquisition hardware control system.
The design laboratory is a learning space for second-year students in AER201. The laboratory is equipped with 34 workstations, each with a design computer, function generator, oscilloscope, and hardware-in-the-loop (HIL) simulation platform.
This research outlines a holistic concurrent design methodology that enhances communication between designers from various disciplines through introducing the universal notion of satisfaction and expressing the holistic behaviour of multidisciplinary systems using the concept of energy. The methodology formalizes subjective aspects of design, resulting in the simplification of the multi-objective constrained optimization process. The impact of the designer’s subjective attitude throughout the design is also formalized and adjusted based on holistic system performance by utilizing an energy-based model of multidisciplinary systems. The application of the methodology to a 5-DOF industrial robot manipulator has shown promising results.
The research focuses on the development of a concurrent design framework for autonomously reconfigurable mechatronic systems. Using manifold topology as an abstract representation of the system configuration, Reconfigurable Mechatronics utilizes a template-based approach along with manifold path planning to select a task-based desirable configuration for the inherently-redundant multidisciplinary systems. The merits of the methodology are shown through its applications to reconfigurable robotic rovers as well as a newly-designed 18 D.O.F. autonomously-reconfigurable serial manipulator.
Mechatronics by Analogy
The research postulates that by establishing a similarity relation between a complex system and a number of simpler systems it is possible to design the former using the analysis and synthesis means developed for the simpler systems. The methodology provides a framework for concurrent engineering of multidisciplinary systems while maintaining the transparency of the system behaviour through making formal analogies between the system and those with more tractable dynamics. The methodology is successfully applied to the design of a monopod robotic leg.
Heterogeneous Robotic Team
The implementation of knowledge-based hierarchical control schemes is studied for developing new architectures that allow a team of non-uniform (with respect to both software and hardware) rover platforms to communicate and collaborate in performing various tasks and also to enhance their collective performance in time, without the intervention of a central server or operator. Using multiple robots with diverse capabilities can result in performing complex tasks with simple individual robot platforms, as can be observed repeatedly in nature, such as in insect colonies. The challenge, however, is to build a simple yet effective means of communication and knowledge integration. The focus of this research is on the methods of parsing the tasks of overall mission objectives and mapping them onto a heterogeneous group of robotic platforms, as well as techniques for the integration of perceptual information packets obtained from heterogeneous robots and their synthesis into a coherent picture for a remote operator’s situational awareness.
Intelligent Robotic Swarm
The research addresses Insect Robotics, which is inspired by social insects. Five major characteristics distinguish insect robots from other multi-agent robotic teams, namely homogeneity, simplicity and compactness, agent-to-agent communication, distributed control strategy, and social learning. Although social insects have simple brains, they are capable of navigating, interacting and cooperating with each other. These characteristics are implemented through the creation of large quantities (tens to thousands) of rover platforms, equipped with sensors and on-board processors, which are small, simple, and cheap, and which collectively behave similarly to a colony of ants in terms of navigation, communication, interaction, and social learning. Such a study will be unique for its attempt to develop a systematic and pragmatic distributed control architecture that accommodates both individualistic navigation/localization and social collaboration/interaction and learning.
Robot Social Learning
The research studies interactions between collective, cooperative and collaborative behaviours of robotic teams, and attempts to develop hybrid multi-agent learning algorithms for enhancing such social behaviours concurrently. Of particular interest are questions such as: i) how to enhance one’s advice; ii) how to build upon advice from others and generate new advice; and iii) how to strike a balance between improving individual performance and enhancing group advice-sharing.
Free-base Robot Manipulation
The research aims at reformulating the kinematic and dynamic equations of free-base manipulators, based on symplectic geometry, in order to obtain suitable laws for concurrent base-manipulator motion control. The goal is to develop a new generation of free-flying manipulators that can be released from the base station to reach larger workspaces. Free-flying and free-floating manipulators are widely used in space applications. In terrestrial applications, mobile manipulators are also gaining popularity for their large workspace. Since these mechanisms are mounted on non-stationary bases, their kinematics, dynamics, and controls are highly coupled with those of their bases.
Aerospace Remote Experimentation
The research attempts to establish a transformative vision of remotely accessible aerospace laboratories for both pedagogical and research purposes. The goal is to enable students and researchers to reliably operate remote devices (such as manipulators) in space, and also to conduct future experiments on the Moon from Earth. Furthermore, a common problem with presenting experimental research results in various communities, such as robotics and aerospace, is the lack of a "unique" setup by which the research hypotheses have been examined. In robotics, for example, a new control algorithm works marvelously on a 4-degree-of-freedom robot with short links and enters the literature as a breakthrough; later, another researcher argues that the same algorithm performs poorly on a different robot. Researchers of a community around the world would therefore welcome a "unique" remote-access laboratory on which all new research work is examined, so that experimental data and outcomes have a unified and consistent meaning for the entire community. A central aerospace robotics laboratory, for example, would be a valuable asset that researchers around the world could access remotely to perform their experiments.
Robotic Hardware-in-the-loop Simulation
This research attempts to develop a practical framework for the concurrent engineering of reconfigurable robot manipulators through the development of a hardware-in-the-loop simulation platform. Reconfigurable robot manipulators have recently received growing attention from both the research community and industry for their potential benefits of versatility in task orientation and adaptation to changing environments. Design of such systems is complicated, as the conventional “decoupled” or “loosely coupled” approaches cannot provide satisfactory solutions with the proliferation of advanced high-speed automation systems. This research addresses a knowledge-based concurrent analysis and synthesis methodology for the detail-level engineering of reconfigurable robot manipulators, with the aid of a scalable, object-oriented hardware-in-the-loop simulation platform. Key features of the platform are: hierarchical and modular architecture, knowledge-base capability, object-oriented modeling and design, reconfigurability and scalability, and distance communication between distributed designers and remote hardware/software modules.
Detail-level Concurrent Engineering of Mechatronic Systems
Systematic concurrent methodologies, such as Linguistic Mechatronics, are applied to realistic design problems to investigate their merits and also find better solutions for complex systems. The market-driven need for multidisciplinary design of mechatronic systems has recently been recognized by the research community and by funding organizations. However, much of the literature on mechatronics design focuses on specific applications, and the multidisciplinary design in them is mostly based on ad hoc strategies for incrementally changing the conventional sequential design into a subsystem-based methodology. Some of the applications include fast-running legged robots, autonomously reconfigurable robot manipulators, rehabilitation robotics, etc.
Object-oriented Mechatronics Modeling and Design
The research extends some of the current object-oriented modeling approaches to concurrent modeling and design of aerospace systems, through which various functional modules can be built and synthesized by integrating programmatic elements (mechanical, electronic, computer, etc.) from pre-assembled libraries of reusable components. Interpretation of aerospace systems, in the form of mathematical textual coding, requires fundamental multidisciplinary knowledge and expertise. Hence, team work is an essential feature of aerospace systems design. Nonetheless, different disciplines conceptualize and represent their knowledge differently. Consequently, it becomes difficult for the participating disciplines to communicate their points of view, let alone collaborate with each other as required in a concurrent design. Hence, a unified system description language is essential for the concurrent design, which interprets different components of the aerospace systems without addressing their underlying theory. Furthermore, a graphical design entry tool with guided and automatic synthesis features will present all functional modules as finite state machines, so that designers can combine them concurrently using a high-level state-flow language. The entire object-oriented interface provides the designers with a hierarchical abstraction from conceptualization through functional decomposition to detailed design.
The research attempts to develop a hybrid framework for teaching mechatronics that synergistically utilizes the two rival learning theories, namely behaviourism and constructivism. Behaviourism conceives learning as dissemination of knowledge via abstract representation of reality, and thus prescribes teaching as transfer of the knowledge from expert to learner. On the other hand, constructivism sees knowledge as a subjective and dynamic product of knower’s experiential world constructed through the senses and social interactions, and thus defines the role of teacher not to dispense knowledge but to serve as a creative mediator and facilitator. The premise of this research is that teaching mechatronics requires both direct instruction and learner-controlled knowledge construction. Hence, both theories must be utilized in a unified framework. Such a framework can be built based on an instructional design theory, namely Elaboration Theory, which allows a gradual transition from content-based to activity-based learning process. One key outcome of the research is the development of an affordable, comprehensive, and transparent Personal Mechatronics Laboratory toolkit for students and researchers.
Courses of at least 90 credits at first cycle, including the following knowledge/courses: D0009E Introduction to Programming, E0013E Basic Course in Electrical Engineering (alternatively E0003E and E0007E), and R0002E Modelling and Control, or equivalent. This means that the student should be able to program in a high-level language, analyse simple electronic circuits, and be familiar with basic control theory. Alternative: completed courses can be replaced by corresponding knowledge acquired through work within the electronics sector.
The selection is based on 20-285 credits
After the course, the student will be able to design and program a mechatronic system including mechanics, electronic sensors, simple electronics, a control circuit (microcomputer), and electric motors.
- After the course the student should be able to construct and analyze a mechatronic system.
- The student should be able to deal critically and creatively with issues and technology solutions, plan and execute a skilled task, and work in teams of varying composition.
- The student should be able to identify the need for further knowledge and to continuously upgrade their skills.
- The student should be able to integrate knowledge, model, simulate, and anticipate a process. This is shown during the execution of the project that is a part of the examination in the course.
- Transistors, transistor circuits with inductive loads.
- Radiometry, the photodiode, amplifiers for photodiodes.
- Digital circuitry, peripheral circuits and control circuits.
- Introduction to the C programming language, program development tools.
- Electric motors and drive circuits for motors.
- Project Mobile Robot.
- OrCAD (PSpice) is used for the description and simulation of designed circuits, both in the laboratory work and in the project.
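The mobile-robot project ties the sensor, electronics, and motor-drive topics together in software. As a minimal sketch (in Python for brevity, although the course itself teaches C; the function name and the range-sensor interface are illustrative, not taken from the course material), a proportional controller mapping a distance reading to a motor PWM duty cycle might look like:

```python
def motor_command(distance_cm, target_cm=20.0, kp=8.0):
    """Proportional controller: map the distance error to a PWM duty cycle.

    Positive duty drives the robot forward (toward the obstacle),
    negative duty drives it backward. Output is clamped to [-100, 100].
    """
    error = target_cm - distance_cm      # positive when the robot is too close
    duty = -kp * error                   # too far -> forward, too close -> reverse
    return max(-100.0, min(100.0, duty)) # clamp to the PWM range
```

On a microcontroller the same logic would run inside a timed loop, with `distance_cm` read from the sensor and the returned duty written to the motor driver.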
Courses of at least 90 credits at first cycle, including the following knowledge/courses: basic knowledge in the subject of automatic control. Concepts such as transfer function, Bode plot, poles and zeros, impulse response and step response, feedback, and PID controllers should be known. Sound knowledge of the Laplace transform and experience with Matlab are also presumed. These prerequisites correspond to the course R0002E or R0003E. Alternative: completed courses can be replaced by corresponding knowledge acquired through work within the process industry or electronics sector.
The course aim is for students to acquire in-depth knowledge of feedback systems, their design and their use in control engineering applications.
The students should be able to:
- demonstrate broad knowledge of control engineering methods and terminology.
- demonstrate broad knowledge of mathematical methods to analyze dynamic systems.
- demonstrate the ability to model dynamic systems based on empirical data.
- use standard methods for designing and analyzing controllers.
- demonstrate an ability to, in a group, simulate, analyze, evaluate, and implement controllers for a real process, and to report on this work both orally and in writing.
- demonstrate the ability to identify constraints of simple controllers and the need for more advanced methods.
- show insight into how the use of automatic control can contribute to sustainable development through reduced consumption of resources.
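The controller-design outcomes above can be illustrated with a short simulation. The sketch below (written in Python rather than the Matlab used in the course; the plant time constant and controller gains are arbitrary illustrative choices) applies a discrete PID controller to a first-order plant and returns the output after a step in the setpoint:

```python
def simulate_pid(setpoint=1.0, kp=2.0, ki=1.0, kd=0.0,
                 tau=1.0, dt=0.01, steps=2000):
    """Discrete PID controlling a first-order plant tau*dy/dt = -y + u.

    Integrates the closed loop with forward Euler for steps*dt seconds
    and returns the final plant output y.
    """
    y, integral, prev_error = 0.0, 0.0, setpoint
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative  # PID control law
        prev_error = error
        y += dt * (-y + u) / tau                          # Euler step of the plant
    return y
```

With these gains the integral term removes the steady-state error, so after 20 simulated seconds the output has settled close to the setpoint; in the course such a loop would be built and tuned in Matlab/Simulink against a real process.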
The master's degree in engineering must change to respond to the needs of the modern practicing engineer. What is needed is a balance between theory and practice, between academic rigor and the best practices of industry, presented in an integrated way that feeds the needs of modern practicing engineers and the companies they work for. The new Master of Engineering in Mechatronics program attempts to remedy these deficiencies. The key element is the one-credit module, which: balances theory and practice, where concepts are application-driven, not theory-driven; identifies and understands industrial best practices by dissecting them into engineering and mathematical fundamental models; achieves innovation by assembling these fundamental models into new products and processes; analyzes both existing and new products and processes using computer simulations within a topic area; demonstrates hardware to show system realization and the validity of modeling and analysis results; shows videos of industry systems and interviews with industry experts; discusses best practices to achieve sustainability of products; and maintains flexibility through 15 one-hour blocks of instruction – a 5-week mini-course, or longer if preferred.
I. Current Situation
It is widely recognized that the future of the U.S., and indeed our everyday lives, is increasingly dependent on scientific and technical innovation. However, the United States is in an innovation crisis fueled by a crisis in engineering education. The innovation shortfall of the past decade is real, and there have been far too few commercial innovations that can transform lives and solve urgent human problems. Society’s problems are getting harder, broader, and deeper, and are multidisciplinary in nature. They require a multidisciplinary systems approach to solve them, and present-day engineering education is not adequately preparing young engineers for the challenge. Basic engineering skills have become commodities worldwide. To be competitive, U.S. engineers must provide high value by being immediate, innovative, integrative, conceptual, and multidisciplinary. In addition, innovation is local – you don’t import it and you don’t export it; you create it! It is a way of thinking, communicating, and doing.
Innovation, the process of inventing something new, desirable, useful, and sustainable, happens at the intersection of technology, business, human factors, and complexity (Figure 1). In addition to addressing the nation’s needs for economic growth and defense, engineers, scientists, and mathematicians must identify and solve societal problems that benefit people, their health and quality of life, and the environment. The STEM (science, technology, engineering, and mathematics) disciplines must embrace a renewed human-centered focus, and along with that a face that attracts a diversity of students interested in serving people at home and worldwide. STEM students, as well as students from the humanities, arts, social sciences, and business, must all realize they are partners in solving the innovation crisis. They each play a vital role and must be able to identify the needs of people, to think critically and solve problems, to generate human-centered ideas and rapidly prototype concepts, to integrate human values and business into concepts, to manage complexity, to work in multidisciplinary teams, and to effectively communicate results. The quality of STEM education in innovation, both in K-12 and at universities, has a direct impact on our ability as a nation to compete in the increasingly competitive global arena.
Engineering, science, and mathematics educators face daunting challenges in preparing this next wave of STEM professionals. In general, the current preparation of students is inadequate for the challenge. Students focus on facts, tests, and grades, and fail to understand concepts and processes. They are unable to integrate knowledge, processes, techniques, and tools, both hardware and software, to solve a multidisciplinary problem. Students need, first and foremost, to become critical-thinking problem solvers. Indeed, one of the great failures in STEM education has been the inability of graduating students to integrate all they have learned in the solution of a real-world problem, as the cartoon suggests.
The situation for practicing industrial engineers is very similar, as they are products of our failing engineering educational system. This situation has been exacerbated by the current economic crisis and is captured by the cartoon.
A College of Engineering must place renewed emphasis on genuine university–industry interaction to create a culture of innovation both throughout the College of Engineering and within industry partner companies. This interaction must be one of mutual collaboration, as only through a balance of theory and practice, i.e., academic rigor and best industrial practices, can the challenging multidisciplinary problems be solved.
Multidisciplinary engineering system design deals with the integrated and optimal design of a physical system, including sensors, actuators, and electronic components, and its embedded digital control system.
Discovery Learning is at the core of a College of Engineering and is best defined by the student commitments or outcomes it brings about rather than by the teaching methods used: critical thinking, independent inquiry, responsibility for one’s own learning, and intellectual growth and development. A range of strategies is used to promote learning, e.g., interactive lecture, discussion, problem-based learning, and case studies, but no exclusive use of traditional lecturing! Instructors assist students in mastering and learning through the process of active investigation. It is student-centered, with a focus on student learning.
The integration is with respect to both hardware components and information processing. An optimal choice must be made with respect to the realization of the design specifications in the different domains. Performance, reliability, low cost, robustness, efficiency, and sustainability are absolutely essential. It is truly a mechatronic system, as the name “mechatronics” does not just mean electro-mechanical.
There are two keys to innovation through mechatronic system design. The first is Human-Centered Design (HCD). HCD requires interdisciplinary collaboration, an iterative process with frequent prototyping, and engagement with real people. As the cost of complexity has decreased dramatically, the quantity of complexity and information has increased just as dramatically, while human evolution (our ability to deal with inherent complexity in powerful systems) has remained constant (Figure 7). HCD helps bridge the gap.
The second key is system-level, model-based design. Once a system is in development, correcting a problem costs 10 times as much as fixing the same problem in concept; if the system has been released, it costs 100 times as much. System-level, model-based design addresses this head on. The best multidisciplinary system design companies excel at communicating design changes across disciplines, partitioning the multiple technologies present and allocating design requirements to specific systems, subsystems, and components, and validating system behavior with modeling and simulation (virtual prototyping) of integrated mechanical, electrical, and software components.
Undergraduate engineering education today is ineffective in preparing students for multidisciplinary system integration and optimization – exactly what is needed by companies to become innovative and gain a competitive advantage in this global economy. While there is some movement in engineering education to change that, this change is not easy, as it involves a cultural shift from the silo approach to a holistic approach. In addition, problems today in energy, environment, health care, and water resources, for example, cannot be solved by technology alone. Only a comprehensive problem-solving approach addressing the issues of feasibility, viability, desirability, usability, and sustainability will lead to a complete, effective solution. As Figure 8 shows, the modern professional engineer must have depth in an engineering discipline, with multidisciplinary engineering breadth and a balance between theory and practice.
A modern multidisciplinary system engineering design team – a mechatronic system design team – most often takes the form shown in Figure 9, with all participants knowledgeable in controls, as it is such a pervasive, enabling technology.
Engineering programs need more than four years to be truly effective. Practicing engineers usually pursue a graduate degree to fill the gaps in their undergraduate education and gain further knowledge and insight. Typically, the graduate degree offers more of the same with less relevance, practicality, integrative insight, and hands-on experience, and more in-depth theory that often goes far beyond what most practicing engineers will ever use. These are siloed degrees in siloed institutions that often become very specialized. Most industries need problem solvers across disciplines rather than experts who know one thing really well. These graduate programs involve a selection of 10-12 three-credit courses from several departments, usually chosen by the student for scheduling convenience. Integration of concepts is left up to the student, as graduate courses are rarely taught in an integrated way; each is its own stand-alone entity. Aggravating the problem is the fact that practicing engineers cannot take a one-to-two-year leave of absence from a company to get a graduate degree. While practicing engineers can take courses by distance education, a three-credit course offered in a semester format can often be overwhelming from a time-commitment point of view and further complicates the integration of concepts. Students learn better in small chunks and not always at the same rate. In addition, the current distance-education model is flawed, as it tries to capture a lecture, with a camera in the back of a room, rather than a learning environment.
The master's degree must change to respond to the needs of the modern practicing engineer. What is needed is a balance between theory and practice, between academic rigor and the best practices of industry, presented in an integrated way that feeds the needs of modern practicing engineers and the companies they work for. The new Master of Engineering in Mechatronics program attempts to remedy these deficiencies. Figure 10 represents a new approach to graduate engineering education. The key element is the one-credit module, which:
• Balances theory and practice, where concepts are application-driven, not theory-driven. Important industry applications are studied with the goal of relating physical operation to engineering fundamentals through modeling and analysis.
• Identifies and understands industrial best practices by dissecting them into engineering and mathematical fundamental models.
• Achieves innovation by assembling these fundamental models into new products and processes.
• Analyzes both existing and new products and processes using computer simulations within a topic area.
• Demonstrates hardware to show system realization and the validity of modeling and analysis results.
• Shows videos of industry systems and interviews with industry experts.
• Discusses best practices to achieve sustainability of products.
• Maintains flexibility through 15 one-hour blocks of instruction – a 5-week mini-course, or longer if preferred.
All instruction is done via video, with instruction interlaced with industrial interviews, laboratory experiments, and editorial sidebars – not just a camera at the back of a room. The modules can be used by both non-degree and degree-seeking students, and also for industry short courses. These modules all then feed into four three-credit case-study courses, taking the student from the user and problem, to concept, to implementation, while emphasizing integration, trade-offs, and optimization at every step. An on-site culminating experience concludes the program, allowing the student to put it all together in a six-credit
Figure 11 shows the integration of these modules in a multidisciplinary engineering system design. Different modules can be added, and others deleted, depending on the application area.
This program doesn’t yet exist, but there is widespread industry and university support for its development. The content for these modules and courses resides in textbooks, industry application papers, and the minds of engineers and professors, so the development challenge is great, but the need is urgent! Modules and courses are presently being developed. Examples of the type of presentation for the Modeling Module can be found at
Marine propulsion is the mechanism or system used to generate thrust to move a ship or boat across water. While paddles and sails are still used on some smaller boats, most modern ships are propelled by mechanical systems consisting of an electric motor or engine turning a propeller, or less frequently, in pump-jets, an impeller. Marine engineering is the discipline concerned with the engineering design process of marine propulsion systems.
Marine steam engines were the first mechanical engines used in marine propulsion, but they have mostly been replaced by two-stroke or four-stroke diesel engines, outboard motors, and gas turbine engines on faster ships. Nuclear reactors producing steam are used to propel warships and icebreakers, but nuclear power for commercial vessels has not been adopted by the marine industry. Electric motors using battery storage have been used for propulsion on submarines and electric boats, and have been proposed for energy-efficient propulsion. Developments in liquefied natural gas (LNG) fueled engines are gaining recognition for their low emissions and cost advantages. Stirling engines, which are more efficient, quieter, and smoother-running, and produce fewer harmful emissions than diesel engines, propel a number of small submarines; the Stirling engine has yet to be scaled up for larger surface ships.
Until the application of the coal-fired steam engine to ships in the early 19th century, oars or the wind were used to assist watercraft propulsion. Merchant ships predominantly used sail, but during periods when naval warfare depended on ships closing to ram or to fight hand-to-hand, galleys were preferred for their manoeuvrability and speed. The Greek navies that fought in the Peloponnesian War used triremes, as did the Romans at the Battle of Actium. The development of naval gunnery from the 16th century onward meant that manoeuvrability took second place to broadside weight; this led to the dominance of the sail-powered warship over the following three centuries.
In modern times, human propulsion is found mainly on small boats or as auxiliary propulsion on sailboats. Human propulsion includes the push pole, rowing, and pedals.
Propulsion by sail generally consists of a sail hoisted on an erect mast, supported by stays, and controlled by lines made of rope. Sails were the dominant form of commercial propulsion until the late nineteenth century, and continued to be used well into the twentieth century on routes where wind was assured and coal was not available, such as in the South American nitrate trade. Sails are now generally used for recreation and racing, although innovative applications of kites/royals, turbosails, rotorsails, wingsails, windmills and SkySails's own kite buoy-system have been used on larger modern vessels for fuel savings.
Reciprocating steam engines
The development of piston-engined steamships was a complex process. Early steamships were fueled by wood, later ones by coal or fuel oil. Early ships used stern or side paddle wheels, while later ones used screw propellers.
The first commercial success accrued to Robert Fulton's North River Steamboat (often called Clermont) in US in 1807, followed in Europe by the 45-foot Comet of 1812. Steam propulsion progressed considerably over the rest of the 19th century. Notable developments include the steam surface condenser, which eliminated the use of sea water in the ship's boilers. This, along with improvements in boiler technology, permitted higher steam pressures, and thus the use of higher efficiency multiple expansion (compound) engines. As the means of transmitting the engine's power, paddle wheels gave way to more efficient screw propellers.
Steam turbines were fueled by coal or, later, fuel oil or nuclear power. The marine steam turbine developed by Sir Charles Algernon Parsons raised the power-to-weight ratio. He achieved publicity by demonstrating it unofficially in the 100-foot Turbinia at the Spithead Naval Review in 1897. This facilitated a generation of high-speed liners in the first half of the 20th century, and rendered the reciprocating steam engine obsolete; first in warships, and later in merchant vessels.
In the early 20th century, heavy fuel oil came into more general use and began to replace coal as the fuel of choice in steamships. Its great advantages were convenience, reduced manpower by removal of the need for trimmers and stokers, and reduced space needed for fuel bunkers.
In the second half of the 20th century, rising fuel costs almost led to the demise of the steam turbine. Most new ships since around 1960 have been built with diesel engines. The last major passenger ship built with steam turbines was the Fairsky, launched in 1984. Similarly, many steam ships were re-engined to improve fuel efficiency; one high-profile example was the 1968-built Queen Elizabeth 2, which had her steam turbines replaced with a diesel-electric propulsion plant in 1986.
Most new-build ships with steam turbines are specialist vessels such as nuclear-powered vessels, and certain merchant vessels (notably Liquefied Natural Gas (LNG) and coal carriers) where the cargo can be used as bunker fuel.
New LNG carriers (a high-growth area of shipping) continue to be built with steam turbines. The natural gas is stored in a liquid state in cryogenic vessels aboard these ships, and a small amount of 'boil-off' gas is needed to maintain the pressure and temperature inside the vessels within operating limits. The 'boil-off' gas provides the fuel for the ship's boilers, which provide steam for the turbines, the simplest way to deal with the gas. However, technology for operating internal combustion engines (modified marine two-stroke diesel engines) on this gas has improved, and such engines are starting to appear in LNG carriers; with their greater thermal efficiency, less gas is burnt. Developments have also been made in the process of re-liquefying 'boil-off' gas, letting it be returned to the cryogenic tanks. The financial returns on LNG are potentially greater than the cost of the marine-grade fuel oil burnt in conventional diesel engines, so the re-liquefaction process is starting to be used on diesel-engine-propelled LNG carriers. Another factor driving the change from turbines to diesel engines for LNG carriers is the shortage of steam-turbine-qualified seagoing engineers. With the lack of turbine-powered ships in other shipping sectors, and the rapid rise in size of the worldwide LNG fleet, not enough engineers have been trained to meet the demand. The days may be numbered for marine steam turbine propulsion systems, even though all but sixteen of the orders for new LNG carriers at the end of 2004 were for steam-turbine-propelled ships.
Nuclear-powered steam turbines
In these vessels, the nuclear reactor heats water to create steam to drive the turbines. Due to low prices of diesel oil, nuclear propulsion is rare except in some Navy and specialist vessels such as icebreakers. In large aircraft carriers, the space formerly used for ship's bunkerage could be used instead to bunker aviation fuel. In submarines, the ability to run submerged at high speed and in relative quiet for long periods holds obvious advantages. A few cruisers have also employed nuclear power; as of 2006, the only ones remaining in service are the Russian Kirov class. An example of a non-military ship with nuclear marine propulsion is the Arktika class icebreaker with 75,000 shaft horsepower (55,930 kW). Commercial experiments such as the NS Savannah have so far proved uneconomical compared with conventional propulsion.
In recent times, there is some renewed interest in commercial nuclear shipping. Nuclear-powered cargo ships could lower costs associated with carbon dioxide emissions and travel at higher cruise speeds than conventional diesel powered vessels.
Reciprocating diesel engines
Most modern ships use a reciprocating diesel engine as their prime mover, due to their operating simplicity, robustness and fuel economy compared to most other prime mover mechanisms. The rotating crankshaft can be directly coupled to the propeller with slow speed engines, via a reduction gearbox for medium and high speed engines, or via an alternator and electric motor in diesel-electric vessels. The rotation of the crankshaft is connected to the camshaft or a hydraulic pump on an intelligent diesel.
The reciprocating marine diesel engine first came into use in 1903, when the diesel-electric river tanker Vandal was put into service by Branobel. Diesel engines soon offered greater efficiency than the steam turbine, but for many years had an inferior power-to-space ratio. The advent of turbocharging, however, hastened their adoption by permitting greater power densities.
Diesel engines today are broadly classified according to:
- Their operating cycle: two-stroke engine or four-stroke engine
- Their construction: crosshead, trunk, or opposed piston
- Their speed
- Slow speed: any engine with a maximum operating speed up to 300 revolutions per minute (rpm), although most large two-stroke slow speed diesel engines operate below 120 rpm. Some very long stroke engines have a maximum speed of around 80 rpm. The largest, most powerful engines in the world are slow speed, two stroke, crosshead diesels.
- Medium speed: any engine with a maximum operating speed in the range 300-900 rpm. Many modern four-stroke medium speed diesel engines have a maximum operating speed of around 500 rpm.
- High speed: any engine with a maximum operating speed above 900 rpm.
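The speed bands above can be expressed as a small classifier. This is a sketch only: treating exactly 300 rpm as the top of the slow-speed band and exactly 900 rpm as the top of the medium-speed band is an assumption, since the text does not say which class the boundary values fall into.

```python
# Sketch: classifying a marine diesel engine by its maximum operating speed,
# using the rpm bands given in the text. The treatment of the exact boundary
# values (300 and 900 rpm) is an assumption.
def classify_by_speed(max_rpm: float) -> str:
    if max_rpm <= 300:
        return "slow speed"
    elif max_rpm <= 900:
        return "medium speed"
    else:
        return "high speed"

# Typical values mentioned in the text:
print(classify_by_speed(102))   # large two-stroke crosshead diesel
print(classify_by_speed(500))   # modern four-stroke medium speed diesel
print(classify_by_speed(1800))  # small-vessel high speed diesel
```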
Most modern larger merchant ships use either slow speed, two stroke, crosshead engines, or medium speed, four stroke, trunk engines. Some smaller vessels may use high speed diesel engines.
The size of the different types of engines is an important factor in selecting what will be installed in a new ship. Slow speed two-stroke engines are much taller, but the footprint required is smaller than that needed for equivalently rated four-stroke medium speed diesel engines. As space above the waterline is at a premium in passenger ships and ferries (especially ones with a car deck), these ships tend to use multiple medium speed engines resulting in a longer, lower engine room than that needed for two-stroke diesel engines. Multiple engine installations also give redundancy in the event of mechanical failure of one or more engines, and the potential for greater efficiency over a wider range of operating conditions.
As modern ships' propellers are at their most efficient at the operating speed of most slow speed diesel engines, ships with these engines do not generally need gearboxes. Usually such propulsion systems consist of either one or two propeller shafts each with its own direct drive engine. Ships propelled by medium or high speed diesel engines may have one or two (sometimes more) propellers, commonly with one or more engines driving each propeller shaft through a gearbox. Where more than one engine is geared to a single shaft, each engine will most likely drive through a clutch, allowing engines not being used to be disconnected from the gearbox while others keep running. This arrangement lets maintenance be carried out while under way, even far from port.
Shipping companies are required to comply with the International Maritime Organization (IMO) and the International Convention for the Prevention of Pollution from Ships (MARPOL) emissions rules. Dual-fuel engines can be fueled by marine-grade diesel, heavy fuel oil, or liquefied natural gas (LNG). A marine LNG engine has multiple fuel options, allowing vessels to transit without relying on one type of fuel. Studies show that LNG is the most efficient of these fuels, although limited access to LNG fueling stations limits the production of such engines. Vessels providing services in the LNG industry have been retrofitted with dual-fuel engines, which have proved extremely effective. Benefits of dual-fuel engines include fuel and operational flexibility, high efficiency, low emissions, and operational cost advantages. Liquefied natural gas engines offer the marine transportation industry an environmentally friendly alternative for powering vessels. In 2010, STX Finland and Viking Line signed an agreement to begin construction of what would be the largest environmentally friendly cruise ferry; construction of NB 1376 will be completed in 2013. According to Viking Line, vessel NB 1376 will primarily be fueled by liquefied natural gas; its nitrogen oxide emissions will be almost zero, and its sulphur oxide emissions will be at least 80% below the IMO's standards. Tax cuts and operational cost advantages have led to the gradual growth of LNG fuel use in engines.
Many warships built since the 1960s have used gas turbines for propulsion, as have a few passenger ships, like the jetfoil. Gas turbines are commonly used in combination with other types of engine. Most recently, the Queen Mary 2 has had gas turbines installed in addition to diesel engines. Because of their poor thermal efficiency at low power (cruising) output, it is common for ships using them to have diesel engines for cruising, with gas turbines reserved for when higher speeds are needed. However, in the case of passenger ships the main reason for installing gas turbines has been to allow a reduction of emissions in sensitive environmental areas or while in port. Some warships and a few modern cruise ships have also used steam turbines to improve the efficiency of their gas turbines in a combined cycle, where waste heat from the gas turbine exhaust is used to boil water and create steam to drive a steam turbine. In such combined cycles, thermal efficiency can be the same as or slightly greater than that of diesel engines alone; however, the grade of fuel needed for these gas turbines is far more costly than that needed for diesel engines, so the running costs are still higher.
Since the late 1980s, Swedish shipbuilder Kockums has built a number of successful Stirling engine powered submarines. The submarines store compressed oxygen to allow more efficient and cleaner external fuel combustion when submerged, providing heat for the Stirling engine's operation. The engines are currently used on submarines of the Gotland and Södermanland classes, and the Japanese Sōryū-class submarines. These are the first submarines to feature Stirling air-independent propulsion (AIP), which extends the underwater endurance from a few days to several weeks.
The heat sink of a Stirling engine is typically the ambient air. In medium- to high-power Stirling engines, a radiator is generally required to transfer heat from the engine to the ambient air. Stirling marine engines have the advantage of using ambient-temperature seawater instead: placing the cooling radiator section in seawater rather than in air allows the radiator to be smaller. The engine's cooling water may be used directly or indirectly for heating and cooling purposes on the ship. The Stirling engine has potential for surface-ship propulsion, as the engine's larger physical size is less of a concern there.
Marine propellers are also known as "screws". There are many variations of marine screw systems, including twin, contra-rotating, controllable-pitch, and nozzle-style screws. While smaller vessels tend to have a single screw, even very large ships such as tankers, container ships and bulk carriers may have single screws for reasons of fuel efficiency. Other vessels may have twin, triple or quadruple screws. Power is transmitted from the engine to the screw by way of a propeller shaft, which may or may not be connected to a gearbox.
The paddle wheel is a large wheel, generally built of a steel framework, upon the outer edge of which are fitted numerous paddle blades (called floats or buckets). The bottom quarter or so of the wheel travels underwater. Rotation of the paddle wheel produces thrust, forward or backward as required. More advanced paddle wheel designs have featured feathering methods that keep each paddle blade oriented closer to vertical while it is in the water; this increases efficiency. The upper part of a paddle wheel is normally enclosed in a paddlebox to minimise splashing.
Paddle wheels have been superseded by screws, which are a much more efficient form of propulsion. Nevertheless, paddle wheels have two advantages over screws, making them suitable for vessels in shallow rivers and constrained waters: first, they are less likely to be clogged by obstacles and debris; and secondly, when contra-rotating, they allow the vessel to spin around its own vertical axis. Some vessels had a single screw in addition to two paddle wheels, to gain the advantages of both types of propulsion.
The purpose of sails is to use wind energy to propel the vessel, sled, board, vehicle or rotor.
An early uncommon means of boat propulsion was the water caterpillar. This moved a series of paddles on chains along the bottom of the boat to propel it over the water and preceded the development of tracked vehicles. The first water caterpillar was developed by Desblancs in 1782 and propelled by a steam engine. In the United States the first water caterpillar was patented in 1839 by William Leavenworth of New York.
An internal combustion engine (ICE) is a heat engine where the combustion of a fuel occurs with an oxidizer (usually air) in a combustion chamber that is an integral part of the working fluid flow circuit. In an internal combustion engine the expansion of the high-temperature and high-pressure gases produced by combustion apply direct force to some component of the engine. The force is applied typically to pistons, turbine blades, or a nozzle. This force moves the component over a distance, transforming chemical energy into useful mechanical energy. The first commercially successful internal combustion engine was created by Étienne Lenoir around 1859 and the first modern internal combustion engine was created in 1864 by Siegfried Marcus.
The term internal combustion engine usually refers to an engine in which combustion is intermittent, such as the more familiar four-stroke and two-stroke piston engines, along with variants such as the six-stroke piston engine and the Wankel rotary engine. A second class of internal combustion engines uses continuous combustion: gas turbines, jet engines and most rocket engines, each of which is an internal combustion engine operating on the same principle as described previously. Firearms are also a form of internal combustion engine.
Internal combustion engines are quite different from external combustion engines, such as steam or Stirling engines, in which the energy is delivered to a working fluid not consisting of, mixed with, or contaminated by combustion products. Working fluids can be air, hot water, pressurized water or even liquid sodium, heated in a boiler. ICEs are usually powered by energy-dense fuels such as gasoline or diesel, liquids derived from fossil fuels. While there are many stationary applications, most ICEs are used in mobile applications and are the dominant power supply for vehicles such as cars, aircraft, and boats.
Typically an ICE is fed with fossil fuels like natural gas or petroleum products such as gasoline, diesel fuel or fuel oil. There is growing usage of renewable fuels like biodiesel for compression ignition engines and bioethanol for spark ignition engines. Hydrogen is sometimes used, and can be made from either fossil fuels or renewable energy.
At one time, the word engine (from Latin, via Old French, ingenium, "ability") meant any piece of machinery, a sense that persists in expressions such as siege engine. A "motor" (from Latin motor, "mover") is any machine that produces mechanical power. Traditionally, electric motors are not referred to as "engines"; however, combustion engines are often referred to as "motors". (An electric engine refers to a locomotive operated by electricity.)
Reciprocating piston engines are by far the most common power source for land and water vehicles, including automobiles, motorcycles, ships and to a lesser extent, locomotives (some are electrical but most use Diesel engines). Wankel engines are used in some automobiles and motorcycles.
Where very high power-to-weight ratios are required, internal combustion engines appear in the form of combustion turbines. Powered aircraft typically use an ICE, which may be a reciprocating engine. Airplanes can instead use jet engines, and helicopters can instead employ turboshafts, both of which are types of turbine. In addition to providing propulsion, airliners employ a separate ICE as an auxiliary power unit.
ICEs drive some of the large electric generators that power electrical grids. They are found in the form of combustion turbines in combined cycle power plants, with a typical electrical output in the range of 100 MW to 1 GW. The high-temperature exhaust is used to boil and superheat water to run a steam turbine, so the efficiency is higher because more energy is extracted from the fuel than the combustion turbine alone could extract. In combined cycle power plants, efficiencies in the range of 50% to 60% are typical. On a smaller scale, diesel generators are used for backup power and for providing electrical power to areas not connected to an electric grid.
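The reason a combined cycle reaches 50-60% can be sketched with a standard bottoming-cycle estimate: the steam turbine converts a fraction of the gas turbine's waste heat into extra work. The component efficiencies below are illustrative assumptions, not figures from the text.

```python
# Sketch: combined-cycle efficiency as gas-turbine efficiency plus the steam
# cycle's recovery of the remaining waste heat. Component efficiencies are
# assumed illustrative values.
def combined_cycle_efficiency(eta_gt: float, eta_st: float) -> float:
    """Overall efficiency when a steam cycle reuses the gas turbine's waste heat."""
    return eta_gt + (1 - eta_gt) * eta_st

eta = combined_cycle_efficiency(eta_gt=0.38, eta_st=0.30)
print(f"{eta:.1%}")  # 56.6%, within the typical 50-60% range
```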
Small engines (usually 2‐stroke gasoline engines) are a common power source for lawnmowers, string trimmers, chain saws, leafblowers, pressure washers, snowmobiles, jet skis, outboard motors, mopeds, and motorcycles.
Structure
The base of a reciprocating internal combustion engine is the engine block, which is typically made of cast iron or aluminium. The engine block contains the cylinders. In engines with more than one cylinder they are usually arranged either in one row (straight engine) or two rows (boxer engine or V engine); three rows are occasionally used (W engine) in contemporary engines, and other engine configurations are possible and have been used. Single-cylinder engines are common for motorcycles and in small engines of machinery. Water-cooled engines contain passages in the engine block where cooling fluid circulates (the water jacket). Some small engines are air-cooled; instead of having a water jacket, the cylinder block has fins protruding from it to cool by directly transferring heat to the air. The cylinder walls are usually finished by honing to obtain a cross hatch, which is better able to retain the oil. Too rough a surface would quickly harm the engine by excessive wear on the piston.
The pistons are short cylindrical parts which seal one end of the cylinder from the high pressure of the compressed air and combustion products, and slide continuously within it while the engine is in operation. The top wall of the piston is termed its crown and is typically flat or concave. Some two-stroke engines use pistons with a deflector head. Pistons are open at the bottom and hollow except for an integral reinforcement structure (the piston web). When an engine is working, the gas pressure in the combustion chamber exerts a force on the piston crown which is transferred through its web to a gudgeon pin. Each piston has rings fitted around its circumference that mostly prevent the gases from leaking into the crankcase or the oil from leaking into the combustion chamber. A ventilation system drives the small amount of gas that escapes past the pistons during normal operation (the blow-by gases) out of the crankcase, so that it does not accumulate, contaminating the oil and causing corrosion. In two-stroke gasoline engines the crankcase is part of the air-fuel path, and due to this continuous flow they do not need a separate crankcase ventilation system.
The cylinder head is attached to the engine block by numerous bolts or studs. It has several functions. The cylinder head seals the cylinders on the side opposite to the pistons; it contains short ducts (the ports) for intake and exhaust and the associated intake valves that open to let the cylinder be filled with fresh air and exhaust valves that open to allow the combustion gases to escape. However, 2-stroke crankcase scavenged engines connect the gas ports directly to the cylinder wall without poppet valves; the piston controls their opening and occlusion instead. The cylinder head also holds the spark plug in the case of spark ignition engines and the injector for engines that use direct injection. All CI engines use fuel injection, usually direct injection, but some engines instead use indirect injection. SI engines can use a carburetor or fuel injection, as port injection or direct injection. Most SI engines have a single spark plug per cylinder, but some have two. A head gasket prevents the gas from leaking between the cylinder head and the engine block. The opening and closing of the valves is controlled by one or several camshafts and springs, or, in some engines, a desmodromic mechanism that uses no springs. The camshaft may press directly on the stem of the valve or may act upon a rocker arm, again either directly or through a pushrod.
The crankcase is sealed at the bottom with a sump that collects the falling oil during normal operation to be cycled again. The cavity created between the cylinder block and the sump houses a crankshaft that converts the reciprocating motion of the pistons to rotational motion. The crankshaft is held in place relative to the engine block by main bearings, which allow it to rotate. Bulkheads in the crankcase form half of every main bearing; the other half is a detachable cap. In some cases a single main bearing deck is used rather than several smaller caps. A connecting rod is connected to offset sections of the crankshaft (the crankpins) at one end and to the piston at the other end through the gudgeon pin, and thus transfers the force and translates the reciprocating motion of the pistons to the circular motion of the crankshaft. The end of the connecting rod attached to the gudgeon pin is called its small end, and the other end, where it is connected to the crankshaft, the big end. The big end has a detachable half to allow assembly around the crankshaft. It is fastened to the connecting rod by removable bolts.
The cylinder head has an intake manifold and an exhaust manifold attached to the corresponding ports. The intake manifold connects to the air filter directly, or to a carburetor, when one is present, which is then connected to the air filter. It distributes the air incoming from these devices to the individual cylinders. The exhaust manifold is the first component in the exhaust system. It collects the exhaust gases from the cylinders and drives them to the following component in the path. The exhaust system of an ICE may also include a catalytic converter and muffler. The final section in the path of the exhaust gases is the tailpipe.
The top dead center (TDC) of a piston is the position where it is nearest to the valves; bottom dead center (BDC) is the opposite position where it is furthest from them. A stroke is the movement of a piston from TDC to BDC or vice versa together with the associated process. While an engine is in operation the crankshaft rotates continuously at a nearly constant speed. In a 4-stroke ICE each piston experiences 2 strokes per crankshaft revolution in the following order. Starting the description at TDC, these are:
- Intake, induction or suction: The intake valves are open as a result of the cam lobe pressing down on the valve stem. The piston moves downward increasing the volume of the combustion chamber and allowing air to enter in the case of a CI engine or an air fuel mix in the case of SI engines that do not use direct injection. The air or air-fuel mixture is called the charge in any case.
- Compression: In this stroke, both valves are closed and the piston moves upward reducing the combustion chamber volume which reaches its minimum when the piston is at TDC. The piston performs work on the charge as it is being compressed; as a result its pressure, temperature and density increase; an approximation to this behavior is provided by the ideal gas law. Just before the piston reaches TDC, ignition begins. In the case of a SI engine, the spark plug receives a high voltage pulse that generates the spark which gives it its name and ignites the charge. In the case of a CI engine the fuel injector quickly injects fuel into the combustion chamber as a spray; the fuel ignites due to the high temperature.
- Power or working stroke: The pressure of the combustion gases pushes the piston downward, generating more work than was required to compress the charge. Complementary to the compression stroke, the combustion gases expand, and as a result their temperature, pressure and density decrease. When the piston is near BDC the exhaust valve opens. The combustion gases expand irreversibly due to the leftover pressure (in excess of the back pressure, the gauge pressure on the exhaust port); this is called the blowdown.
- Exhaust: The exhaust valve remains open while the piston moves upward expelling the combustion gases. For naturally aspirated engines a small part of the combustion gases may remain in the cylinder during normal operation because the piston does not close the combustion chamber completely; these gases remain and mix with the next charge. At the end of this stroke, the exhaust valve closes, the intake valve opens, and the sequence repeats in the next cycle. The intake valve may open before the exhaust valve closes to allow better scavenging.
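The four strokes above can be sketched as a short sequence. Each stroke is half a crankshaft revolution, so one complete cycle takes two revolutions; the names and structure below are illustrative only.

```python
# Sketch: the four-stroke sequence and its relation to crankshaft rotation.
# One stroke = half a crankshaft revolution, so a full cycle = 2 revolutions.
STROKES = ["intake", "compression", "power", "exhaust"]

def crank_revolutions_per_cycle(strokes_per_cycle: int = 4,
                                strokes_per_revolution: int = 2) -> float:
    return strokes_per_cycle / strokes_per_revolution

for i, stroke in enumerate(STROKES):
    # Piston descends during intake and power, ascends during the other two.
    motion = "down" if stroke in ("intake", "power") else "up"
    print(f"stroke {i + 1}: {stroke:11s} piston moves {motion}")

print(crank_revolutions_per_cycle())  # 2.0
```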
The defining characteristic of this kind of engine is that each piston completes a cycle every crankshaft revolution. The four processes of intake, compression, power and exhaust take place in only two strokes, so it is not possible to dedicate a stroke exclusively to each of them. Starting at TDC the cycle consists of:
- Power: While the piston is descending, the combustion gases perform work on it, as in a 4-stroke engine. The same thermodynamic considerations about the expansion apply.
- Scavenging: Around 75° of crankshaft rotation before BDC the exhaust valve or port opens, and blowdown occurs. Shortly thereafter the intake valve or transfer port opens. The incoming charge displaces the remaining combustion gases to the exhaust system and a part of the charge may enter the exhaust system as well. The piston reaches BDC and reverses direction. After the piston has traveled a short distance upwards into the cylinder the exhaust valve or port closes; shortly the intake valve or transfer port closes as well.
- Compression: With both intake and exhaust closed, the piston continues moving upwards, compressing the charge and performing work on it. As in the case of a 4-stroke engine, ignition starts just before the piston reaches TDC, and the same considerations about the thermodynamics of the compression apply.
While a 4-stroke engine uses the piston as a positive displacement pump to accomplish scavenging, taking 2 of the 4 strokes, a 2-stroke engine uses the last part of the power stroke and the first part of the compression stroke for combined intake and exhaust. The work required to displace the charge and exhaust gases comes from either the crankcase or a separate blower. Two main approaches to scavenging (the expulsion of burned gas and entry of fresh mix) are described: loop scavenging and uniflow scavenging; SAE news published in the 2010s reported that loop scavenging is better under any circumstance than uniflow scavenging.
Some SI engines are crankcase scavenged and do not use poppet valves. Instead the crankcase and the part of the cylinder below the piston are used as a pump. The intake port is connected to the crankcase through a reed valve or a rotary disk valve driven by the engine. For each cylinder, a transfer port connects at one end to the crankcase and at the other end to the cylinder wall. The exhaust port is connected directly to the cylinder wall. The transfer and exhaust ports are opened and closed by the piston. The reed valve opens when the crankcase pressure is slightly below intake pressure, to let the crankcase be filled with a new charge; this happens when the piston is moving upwards. When the piston is moving downwards the pressure in the crankcase increases, the reed valve closes promptly, and the charge in the crankcase is compressed. As the piston continues downwards, it uncovers the exhaust port and the transfer port, and the higher pressure of the charge in the crankcase makes it enter the cylinder through the transfer port, blowing out the exhaust gases. Lubrication is accomplished by adding 2-stroke oil to the fuel in small ratios; petroil refers to the mix of gasoline with this oil. This kind of 2-stroke engine has lower efficiency than comparable 4-stroke engines and releases more polluting exhaust gases, for the following reasons:
- They use a total-loss lubrication system: all the lubricating oil is eventually burned along with the fuel.
- There are conflicting requirements for scavenging: on one hand, enough fresh charge needs to be introduced in each cycle to displace almost all the combustion gases, but on the other, introducing too much of it means that a part of it goes out the exhaust.
- They must use the transfer port(s) as a carefully designed and placed nozzle so that a gas current is created in a way that it sweeps the whole cylinder before reaching the exhaust port so as to expel the combustion gases, but minimize the amount of charge exhausted. 4-stroke engines have the benefit of forcibly expelling almost all of the combustion gases because during exhaust the combustion chamber is reduced to its minimum volume. In crankcase scavenged 2-stroke engines, exhaust and intake are performed mostly simultaneously and with the combustion chamber at its maximum volume.
The main advantage of 2-stroke engines of this type is mechanical simplicity and a higher power-to-weight ratio than their 4-stroke counterparts. Despite having twice as many power strokes per crankshaft revolution, less than twice the power of a comparable 4-stroke engine is attainable in practice.
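The firing-frequency comparison can be made concrete: at the same crankshaft speed a two-stroke fires every revolution and a four-stroke every other revolution. The 6000 rpm figure below is an arbitrary illustrative value.

```python
# Sketch: power strokes per minute for two- vs four-stroke engines at the
# same crankshaft speed. A two-stroke fires once per revolution, a four-stroke
# once per two revolutions; the 2x firing ratio does not translate into 2x
# power in practice.
def power_strokes_per_minute(rpm: float, revolutions_per_power_stroke: int) -> float:
    return rpm / revolutions_per_power_stroke

rpm = 6000  # illustrative engine speed
two_stroke = power_strokes_per_minute(rpm, 1)   # 6000 firings/min
four_stroke = power_strokes_per_minute(rpm, 2)  # 3000 firings/min
print(two_stroke / four_stroke)  # 2.0
```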
Using a separate blower avoids many of the shortcomings of crankcase scavenging, at the expense of increased complexity which means a higher cost and an increase in maintenance requirement. An engine of this type uses ports or valves for intake and valves for exhaust, except opposed piston engines, which may also use ports for exhaust. The blower is usually of the Roots-type but other types have been used too. This design is commonplace in CI engines, and has been occasionally used in SI engines.
CI engines that use a blower typically use uniflow scavenging. In this design the cylinder wall contains several intake ports placed uniformly around the circumference, just above the position that the piston crown reaches at BDC. An exhaust valve, or several as in 4-stroke engines, is used. The final part of the intake manifold is an air sleeve which feeds the intake ports. The intake ports are placed at a horizontal angle to the cylinder wall (i.e., in the plane of the piston crown) to give a swirl to the incoming charge and improve combustion. The largest reciprocating ICEs are low-speed CI engines of this type; they are used for marine propulsion (see marine diesel engine) or electric power generation and achieve the highest thermal efficiencies among internal combustion engines of any kind. Some diesel-electric locomotive engines operate on the 2-stroke cycle. The most powerful of them have a brake power of around 4.5 MW or 6,000 HP. The EMD SD90MAC class of locomotives uses a 2-stroke engine, while the comparable GE AC6000CW class, whose prime mover has almost the same brake power, uses a 4-stroke engine.
An example of this type of engine is the Wärtsilä-Sulzer RTA96-C turbocharged 2-stroke diesel, used in large container ships. It is the most efficient and powerful internal combustion engine in the world, with a thermal efficiency over 50%. For comparison, the most efficient small four-stroke engines are around 43% thermally efficient (SAE 900648); size is an advantage for efficiency due to the increase in the ratio of volume to surface area.
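The efficiency gap can be translated into fuel terms with a brake-specific fuel consumption estimate. This is a sketch under an assumption: the lower heating value of 42.7 MJ/kg for diesel fuel is an assumed typical figure, not a value from the text.

```python
# Sketch: converting thermal efficiency to brake-specific fuel consumption
# (bsfc, in g/kWh). bsfc = energy per kWh / (efficiency * fuel heating value).
# The diesel lower heating value below is an assumed typical figure.
KWH_IN_JOULES = 3.6e6
DIESEL_LHV_J_PER_G = 42_700  # assumed lower heating value of diesel fuel

def bsfc_g_per_kwh(thermal_efficiency: float) -> float:
    return KWH_IN_JOULES / (thermal_efficiency * DIESEL_LHV_J_PER_G)

print(round(bsfc_g_per_kwh(0.50)))  # ~169 g/kWh at 50% efficiency
print(round(bsfc_g_per_kwh(0.43)))  # ~196 g/kWh at 43% efficiency
```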
See the external links for an in-cylinder combustion video in a 2-stroke, optically accessible motorcycle engine.
A two-stroke, or two-cycle, engine is a type of internal combustion engine which completes a power cycle with two strokes (up and down movements) of the piston during only one crankshaft revolution. This is in contrast to a "four-stroke engine", which requires four strokes of the piston to complete a power cycle. In a two-stroke engine, the end of the combustion stroke and the beginning of the compression stroke happen simultaneously, with the intake and exhaust (or scavenging) functions occurring at the same time.
Two-stroke engines often have a high power-to-weight ratio, usually in a narrow range of rotational speeds called the "power band". Compared to four-stroke engines, two-stroke engines have a greatly reduced number of moving parts, and so can be more compact and significantly lighter.
The first commercial two-stroke engine involving in-cylinder compression is attributed to Scottish engineer Dugald Clerk, who patented his design in 1881. However, unlike most later two-stroke engines, his had a separate charging cylinder. The crankcase-scavenged engine, employing the area below the piston as a charging pump, is generally credited to Englishman Joseph Day. The first truly practical two-stroke engine is attributed to Yorkshireman Alfred Angas Scott, who started producing twin-cylinder water-cooled motorcycles in 1908.
Gasoline (spark ignition) versions are particularly useful in lightweight or portable applications such as chainsaws and motorcycles. Despite that, they are also used in diesel compression ignition engines operating in large, weight-insensitive applications, such as marine propulsion, railway locomotives and electricity generation. In a two-stroke engine, the heat transfer from the engine to the cooling system is less than in a four-stroke, which means that two-stroke engines are more efficient. However, crankcase-compression two-stroke engines, such as the common small gasoline-powered engines, create more exhaust emissions than four-stroke engines because their petroil lubrication mixture is also burned in the engine, due to the engine's total-loss oiling system.
- 1 Applications
- 2 Different two-stroke design types
- 2.1 Piston-controlled inlet port
- 2.2 Reed inlet valve
- 2.3 Rotary inlet valve
- 2.4 Cross-flow-scavenged
- 2.5 Loop-scavenged
- 2.6 Uniflow-scavenged
- 2.7 Stepped piston engine
- 3 Power valve systems
- 4 Direct injection
- 5 Two-stroke diesel engine
- 6 Lubrication
- 7 Two-stroke reversibility
- 8 See also
- 9 Notes
- 10 References
- 11 External links
The two-stroke petrol (gasoline) engine was very popular throughout the 19th and 20th centuries in motorcycles and small-engined devices, such as chainsaws and outboard motors. They were also used in some cars, a few tractors and many ships. Part of their appeal was their simple design (and resulting low cost) and often high power-to-weight ratio. The lower cost to rebuild and maintain made the two-stroke engine very popular, until the United States EPA mandated more stringent emission controls in 1978 (taking effect in 1980) and in 2004 (taking effect in 2005 and 2010). The industry largely responded by switching to four-stroke petrol engines, which emit less pollution. Most small designs use petroil (two-stroke oil) lubrication, with the oil being burned in the combustion chamber, causing "blue smoke" and other types of exhaust pollution. This is a major reason why two-stroke engines were replaced by four-stroke engines in many applications.
Simple two-stroke petrol engines continue to be commonly used in high-power, handheld applications such as string trimmers and chainsaws. Their light overall weight and lightweight spinning parts give important operational and safety advantages. For example, a four-stroke engine able to power a chainsaw operating in any position would be much more expensive and complex than a two-stroke engine that uses a gasoline-oil mixture.
These engines are preferred for small, portable, or specialized machine applications such as outboard motors, high-performance, small-capacity motorcycles, mopeds, underbones, scooters, tuk-tuks, snowmobiles, karts, ultralights, model airplanes (and other model vehicles), lawnmowers, chain saws, weed-wackers and dirt bikes.
The two-stroke cycle is also used in many diesel engines, most notably large industrial and marine engines, as well as some trucks and heavy machinery.
A number of mainstream automobile manufacturers have used two-stroke engines in the past, including the Swedish Saab and German manufacturers DKW and Auto-Union. The Japanese manufacturer Suzuki did the same in the 1970s. Production of two-stroke cars ended in the 1980s in the West, but Eastern Bloc countries continued until around 1991, with the Trabant and Wartburg in East Germany. Lotus of Norfolk, UK, has a prototype direct-injection two-stroke engine intended for alcohol fuels called the Omnivore which it is demonstrating in a version of the Exige. As this uses direct fuel injection, there are dramatic decreases in emission levels and increases in fuel efficiency.
Different two-stroke design types
Although the principles remain the same, the mechanical details of various two-stroke engines differ depending on the type. The design types vary according to the method of introducing the charge to the cylinder, the method of scavenging the cylinder (exchanging burnt exhaust for fresh mixture) and the method of exhausting the cylinder.
Piston-controlled inlet port
Piston port is the simplest of the designs and the most common in small two-stroke engines. All functions are controlled solely by the piston covering and uncovering the ports as it moves up and down in the cylinder. In the 1970s, Yamaha worked out some basic principles for this system. They found that, in general, widening an exhaust port increases the power by the same amount as raising the port, but the power band does not narrow as it does when the port is raised. However, there is a mechanical limit to the width of a single exhaust port, at about 62% of the bore diameter for reasonable ring life; beyond this, the rings will bulge into the exhaust port and wear quickly. A maximum of 70% of bore width is possible in racing engines, where rings are changed every few races. Intake duration is between 120 and 160 degrees, and transfer port time is set at a minimum of 26 degrees. The strong low-pressure pulse of a racing two-stroke expansion chamber can drop the pressure to −7 psi when the piston is at bottom dead center and the transfer ports are nearly wide open. One of the reasons for high fuel consumption in two-strokes is that some of the incoming pressurized fuel/air mixture is forced across the top of the piston, where it has a cooling action, and straight out the exhaust pipe; an expansion chamber with a strong reverse pulse will stop this outgoing flow. A fundamental difference from typical four-stroke engines is that the two-stroke's crankcase is sealed and forms part of the induction process in gasoline and hot-bulb engines. Diesel two-strokes often add a Roots blower or piston pump for scavenging.
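As a rough illustration of the port-width limits quoted above, the sketch below applies the 62% and 70% figures to a hypothetical 54 mm bore. The bore size and function name are assumptions for illustration only, and port width is treated simply as a fraction of bore diameter:

```python
# Rule-of-thumb exhaust port width limits from the text: ~62% of bore
# for reasonable ring life, up to ~70% in racing engines with frequent
# ring changes.  The 54 mm bore is an assumed example value.
def max_exhaust_port_width(bore_mm, fraction):
    """Widest single exhaust port for a given bore and width fraction."""
    return bore_mm * fraction

bore = 54.0  # mm, assumed for illustration
print(f"street limit (62%): {max_exhaust_port_width(bore, 0.62):.1f} mm")
print(f"racing limit (70%): {max_exhaust_port_width(bore, 0.70):.1f} mm")
```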
Reed inlet valve
The reed valve is a simple but highly effective form of check valve commonly fitted in the intake tract of the piston-controlled port. They allow asymmetric intake of the fuel charge, improving power and economy, while widening the power band. They are widely used in motorcycle, ATV and marine outboard engines.
Rotary inlet valve
The intake pathway is opened and closed by a rotating member. A familiar type sometimes seen on small motorcycles is a slotted disk attached to the crankshaft which covers and uncovers an opening in the end of the crankcase, allowing charge to enter during one portion of the cycle.
Another form of rotary inlet valve used on two-stroke engines employs two cylindrical members with suitable cutouts arranged to rotate one within the other - the inlet pipe having passage to the crankcase only when the two cutouts coincide. The crankshaft itself may form one of the members, as in most glow plug model engines. In another embodiment, the crank disc is arranged to be a close-clearance fit in the crankcase, and is provided with a cutout which lines up with an inlet passage in the crankcase wall at the appropriate time, as in the Vespa motor scooter.
The advantage of a rotary valve is it enables the two-stroke engine's intake timing to be asymmetrical, which is not possible with piston port type engines. The piston port type engine's intake timing opens and closes before and after top dead center at the same crank angle, making it symmetrical, whereas the rotary valve allows the opening to begin earlier and close earlier.
Rotary valve engines can be tailored to deliver power over a wider speed range, or higher power over a narrower speed range, than either piston port or reed valve engines. Where a portion of the rotary valve is a portion of the crankcase itself, it is particularly important that no wear is allowed to take place.
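The timing asymmetry described above can be sketched as a simple comparison. All angle values here are hypothetical illustrations, not figures from the text:

```python
# Piston-port intake timing is forced to be symmetric about TDC:
# if the port opens N degrees before TDC, it closes N degrees after.
# A rotary valve can open earlier and close earlier (asymmetric).
piston_port = {"opens_btdc": 75, "closes_atdc": 75}    # symmetric by geometry
rotary_valve = {"opens_btdc": 130, "closes_atdc": 55}  # asymmetric (example)

def is_symmetric(timing):
    """Timing is symmetric when opening and closing angles match."""
    return timing["opens_btdc"] == timing["closes_atdc"]

print(is_symmetric(piston_port))   # True
print(is_symmetric(rotary_valve))  # False
```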
Cross-flow scavenging
In a cross-flow engine, the transfer and exhaust ports are on opposite sides of the cylinder, and a deflector on the top of the piston directs the fresh intake charge into the upper part of the cylinder, pushing the residual exhaust gas down the other side of the deflector and out the exhaust port. The deflector increases the piston's weight and exposed surface area, affecting piston cooling and also making it difficult to achieve an efficient combustion chamber shape. This design has been superseded since the 1960s by the loop scavenging method (below), especially for motorbikes, although for smaller or slower engines, such as lawn mowers, the cross-flow-scavenged design can be an acceptable approach.
Loop scavenging
This method of scavenging uses carefully shaped and positioned transfer ports to direct the flow of fresh mixture toward the combustion chamber as it enters the cylinder. The fuel/air mixture strikes the cylinder head, then follows the curvature of the combustion chamber, and then is deflected downward.
This not only prevents the fuel/air mixture from traveling directly out the exhaust port, but also creates a swirling turbulence which improves combustion efficiency, power and economy. Usually, a piston deflector is not required, so this approach has a distinct advantage over the cross-flow scheme (above).
Often referred to as "Schnuerle" (or "Schnürle") loop scavenging after the German inventor of an early form in the mid-1920s, it became widely adopted in that country during the 1930s and spread further afield after World War II.
Loop scavenging is the most common type of fuel/air mixture transfer used on modern two-stroke engines. Suzuki was one of the first manufacturers outside of Europe to adopt loop-scavenged two-stroke engines. This operational feature was used in conjunction with the expansion chamber exhaust developed by the German motorcycle manufacturer MZ and Walter Kaaden.
Loop scavenging, disc valves and expansion chambers worked in a highly coordinated way to significantly increase the power output of two-stroke engines, particularly from the Japanese manufacturers Suzuki, Yamaha and Kawasaki. Suzuki and Yamaha enjoyed success in Grand Prix motorcycle racing in the 1960s due in no small part to the increased power afforded by loop scavenging.
An additional benefit of loop scavenging was that the piston could be made nearly flat or slightly dome-shaped, which allowed the piston to be appreciably lighter and stronger, and consequently to tolerate higher engine speeds. The "flat top" piston also has better thermal properties and is less prone to uneven heating, expansion, piston seizures, dimensional changes and compression losses.
SAAB built 750 and 850 cc 3-cylinder engines based on a DKW design that proved reasonably successful employing loop charging. The original SAAB 92 had a two-cylinder engine of comparatively low efficiency. At cruising speed, reflected wave exhaust port blocking occurred at too low a frequency. Using the asymmetric three-port exhaust manifold employed in the identical DKW engine improved fuel economy.
The 750 cc standard engine produced 36 to 42 hp, depending on the model year. The Monte Carlo Rally variant, 750 cc (with a filled crankshaft for higher base compression), generated 65 hp. An 850 cc version was available in the 1966 SAAB Sport (a standard-trim model in comparison to the deluxe trim of the Monte Carlo). Base compression comprises a portion of the overall compression ratio of a two-stroke engine. Work published by SAE in 2012 indicates that loop scavenging is under every circumstance more efficient than cross-flow scavenging.
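Since base (crankcase) compression contributes to the overall compression of a two-stroke, a filled crankshaft raises it by reducing crankcase volume. A minimal sketch of that relationship, with all volumes assumed for illustration (none are given in the text):

```python
# Crankcase ("base") compression ratio for one two-stroke cylinder:
# crankcase volume with piston at TDC (largest) divided by the volume
# at BDC (smallest).  Filling the crankshaft reduces crankcase volume
# and so raises this ratio.  All volumes here are assumed examples.
def crankcase_compression_ratio(vol_at_tdc_cc, swept_cc):
    # Descending through its stroke, the piston shrinks the crankcase
    # space by the swept volume.
    return vol_at_tdc_cc / (vol_at_tdc_cc - swept_cc)

swept = 250.0  # cc per cylinder for a 750 cc three-cylinder engine
standard = crankcase_compression_ratio(vol_at_tdc_cc=600.0, swept_cc=swept)
filled = crankcase_compression_ratio(vol_at_tdc_cc=500.0, swept_cc=swept)
print(round(standard, 2), round(filled, 2))  # filled crank gives the higher ratio
```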
Uniflow scavenging
In a uniflow engine, the mixture, or "charge air" in the case of a diesel, enters at one end of the cylinder controlled by the piston, and the exhaust exits at the other end controlled by an exhaust valve or piston. The scavenging gas flow is therefore in one direction only, hence the name uniflow. The valved arrangement is common in on-road, off-road and stationary two-stroke engines (Detroit Diesel), certain small marine two-stroke engines (Gray Marine), certain railroad two-stroke diesel locomotives (Electro-Motive Diesel) and large marine two-stroke main propulsion engines (Wärtsilä). Ported types are represented by the opposed-piston design, in which there are two pistons in each cylinder working in opposite directions, such as the Junkers Jumo and Napier Deltic. The once-popular split-single design falls into this class, being effectively a folded uniflow. With advanced-angle exhaust timing, uniflow engines can be supercharged with a crankshaft-driven (piston or Roots) blower.
Stepped piston engine
The piston of this engine is "top-hat" shaped; the upper section forms the regular cylinder, and the lower section performs a scavenging function. The units run in pairs, with the lower half of one piston charging an adjacent combustion chamber.
This system is still partially dependent on total-loss lubrication (for the upper part of the piston); the other parts are sump-lubricated, with cleanliness and reliability benefits. The piston is only about 20% heavier than a loop-scavenged piston because skirt thicknesses can be less. Bernard Hooper Engineering Ltd. (BHE) is one of the more recent engine developers using this approach.
Power valve systems
Many modern two-stroke engines employ a power valve system. The valves are normally in or around the exhaust ports. They work in one of two ways: either they alter the exhaust port by closing off the top part of the port, which alters port timing, such as Ski-doo R.A.V.E, Yamaha YPVS, Honda RC-Valve, Kawasaki K.I.P.S., Cagiva C.T.S. or Suzuki AETC systems, or by altering the volume of the exhaust, which changes the resonant frequency of the expansion chamber, such as the Suzuki SAEC and Honda V-TACS system. The result is an engine with better low-speed power without sacrificing high-speed power. However, as power valves are in the hot gas flow they need regular maintenance to perform well.
Direct injection
Direct injection has considerable advantages in two-stroke engines, eliminating some of the waste and pollution caused by carbureted two-strokes, where a proportion of the fuel/air mixture entering the cylinder goes directly out, unburned, through the exhaust port. Two systems are in use: low-pressure air-assisted injection and high-pressure injection.
Since the fuel does not pass through the crankcase, a separate source of lubrication is needed.
Two-stroke diesel engine
Diesel engines rely solely on the heat of compression for ignition. In the case of Schnuerle ported and loop-scavenged engines, intake and exhaust happens via piston-controlled ports. A uniflow diesel engine takes in air via scavenge ports, and exhaust gases exit through an overhead poppet valve. Two-stroke diesels are all scavenged by forced induction. Some designs use a mechanically driven Roots blower, whilst marine diesel engines normally use exhaust-driven turbochargers, with electrically driven auxiliary blowers for low-speed operation when exhaust turbochargers are unable to deliver enough air.
Marine two-stroke diesel engines directly coupled to the propeller are able to start and run in either direction as required. The fuel injection and valve timing are mechanically readjusted by using a different set of cams on the camshaft. Thus, the engine can be run in reverse to move the vessel backwards.
Lubrication
Most small petrol two-stroke engines cannot be lubricated by oil contained in their crankcase and sump, since the crankcase is already being used to pump fuel-air mixture into the cylinder. Traditionally, the moving parts (both the rotating crankshaft and the sliding piston) were lubricated by a premixed fuel-oil mixture, at a ratio between 16:1 and 100:1. As late as the 1970s, petrol stations would often have a separate pump to deliver such premix fuel to motorcycles. Even then, in many cases the rider would carry a bottle of their own two-stroke oil: taking care to close the fuel tap first, he or she would meter in a little oil (using the cap of the bottle) and then add the petrol, mixing the two liquids. Two-stroke oils, which became available worldwide in the 1970s, are specifically designed to mix with petrol and be burnt in the combustion chamber without leaving undue unburnt oil or ash. This led to a marked reduction in spark plug fouling, which had been a common problem in two-stroke engines.
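The premix ratios quoted above translate directly into how much oil to add to a tank of petrol. A minimal sketch, where the 16:1 to 100:1 range is from the text but the 5-litre tank size is an assumption:

```python
# Oil needed to premix a tank of petrol at a given fuel:oil ratio.
def oil_ml(fuel_litres, ratio):
    """Millilitres of two-stroke oil for fuel_litres of petrol at ratio:1."""
    return fuel_litres * 1000.0 / ratio

tank = 5.0  # litres, assumed tank size
for ratio in (16, 50, 100):
    print(f"{ratio}:1 -> {oil_ml(tank, ratio):.1f} ml of oil")
```

A richer ratio such as 16:1 needs several times more oil per tank than the leaner 100:1 mixes made practical by modern two-stroke oils.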
All two-stroke engines running on a petrol/oil mix will suffer oil starvation if forced to rotate at speed with the throttle closed, e.g. motorcycles descending long hills and perhaps when decelerating gradually from high speed by changing down through the gears. Two-stroke cars (such as those that were popular in Eastern Europe in the mid-20th century) were in particular danger and were usually fitted with freewheel mechanisms in the powertrain, allowing the engine to idle when the throttle was closed, requiring the use of the brakes in all slowing situations.
Large two-stroke engines, including diesels, normally use a sump lubrication system similar to four-stroke engines. The cylinder must still be pressurized, but this is not done from the crankcase, but by an ancillary Roots-type blower or a specialized turbocharger (usually a turbo-compressor system) which has a "locked" compressor for starting (and during which it is powered by the engine's crankshaft), but which is "unlocked" for running (and during which it is powered by the engine's exhaust gases flowing through the turbine).
Two-stroke reversibility
For the purpose of this discussion, it is convenient to think in motorcycle terms, where the exhaust pipe faces into the cooling air stream and the crankshaft commonly spins in the same axis and direction as the wheels, i.e. "forward". Some of the considerations discussed here apply to four-stroke engines (which cannot reverse their direction of rotation without considerable modification), almost all of which spin forward, too.
Regular gasoline two-stroke engines will run backwards for short periods and under light load with little problem, and this has been used to provide a reversing facility in microcars, such as the Messerschmitt KR200, that lacked reverse gearing. Where the vehicle had electric starting, the motor would be turned off and restarted backwards by turning the key in the opposite direction. Two-stroke golf carts have used a similar kind of system. Traditional flywheel magnetos (using contact-breaker points but no external coil) worked equally well in reverse, because the cam controlling the points is symmetrical, breaking contact before top dead center (TDC) equally well whether running forwards or backwards. Reed-valve engines run backwards just as well as piston-port engines, though rotary-valve engines have asymmetrical inlet timing and will not run backwards very well.
There are serious disadvantages to running many engines backwards under load for any length of time, and some of these reasons are general, applying equally to two-stroke and four-stroke engines. These disadvantages are accepted in most cases where cost, weight and size are major considerations. The problem comes about because in "forwards" running the major thrust face of the piston is on the back face of the cylinder, which, in a two-stroke particularly, is the coolest and best-lubricated part. The forward face of the piston in a trunk engine is less well suited to be the major thrust face, since it covers and uncovers the exhaust port in the cylinder, the hottest part of the engine, where piston lubrication is at its most marginal. The front face of the piston is also more vulnerable, since the exhaust port, the largest in the engine, is in the front wall of the cylinder: piston skirts and rings risk being extruded into this port, so it is always better to have them pressing hardest on the opposite wall (where there are only the transfer ports in a cross-flow engine) and where there is good support. In some engines, the small end is offset to reduce thrust in the intended rotational direction, and the forward face of the piston has been made thinner and lighter to compensate; when running backwards, this weaker forward face suffers increased mechanical stress it was not designed to resist. These problems can be avoided by the use of crossheads, along with thrust bearings to isolate the engine from end loads.
Large two-stroke ship diesels are sometimes made to be reversible. Like four-stroke ship engines (some of which are also reversible), they use mechanically operated valves, so they require additional camshaft mechanisms. These engines use crossheads to eliminate side thrust on the piston.
On top of other considerations, the oil pump of a modern two-stroke may not work in reverse, in which case the engine will suffer oil starvation within a short time. Running a motorcycle engine backwards is relatively easy to initiate and, in rare cases, can be triggered by a backfire. It is not advisable.
Model airplane engines with reed-valves can be mounted in either tractor or pusher configuration without needing to change the propeller. These motors are compression ignition, so there are no ignition timing issues and little difference between running forward and running backward.
Etymology
This term is a translation of the English phrase Four-cycle-Engine and refers to engines that do their work in four piston strokes. (The movement of the piston from its highest position in the cylinder to its lowest position is called one piston stroke.) In technical language these engines are called four-stroke-cycle engines, equivalent to the phrase Four-Stroke-cycle-Engine.
Overview
In general, internal combustion engines are divided into two broad classes based on the number of power strokes per engine revolution: two-stroke engines and four-stroke engines. Two-stroke engines are structurally simpler, but four-stroke engines are more efficient.
History
The first important step in the development of four-stroke engines was taken in the mid-nineteenth century, when a French engineer named Beau de Rochas set out four basic principles for the operation of combustion engines. The development and application of these principles led to the construction of four-stroke engines. The principles are as follows:
- The combustion chamber should have the smallest possible surface-to-volume ratio.
- The expansion process should take place as quickly as possible.
- The compression at the beginning of the expansion stroke should be as high as possible.
- The expansion stroke should be as long as possible.
Types of four-stroke engines
Four-stroke engines fall into two broad categories:
- Spark-ignition engines:
In these engines a spark is used to ignite the fuel.
- Diesel engines:
In these engines the fuel is ignited by the heat generated in the cylinder and combustion chamber (this high temperature is produced by the strong compression of the working fluid).
In diesel engines only air is drawn into the cylinder during the intake stroke, and only air is compressed in the combustion chamber during the compression stroke; however, the degree of compression in diesel engines is higher than in spark-ignition engines. This high compression generates enough heat to ignite the fuel as soon as it is injected during the power stroke.
Construction of the four-stroke engine
Four-stroke engines are themselves a group of internal combustion engines. Internal combustion engines need a number of parts and systems in order to operate, such as the fuel system, the engine block, the valve system and the cooling system. Four-stroke engines, however, have mechanisms that allow the four stages of intake, compression, power and exhaust to be carried out separately (in two-stroke engines the intake, power, exhaust and compression stages overlap). These mechanisms include:
- The fuel delivery and metering system
- The valve system, which precisely controls the entry and exit of gases
- The intake manifold and exhaust manifold
- The ignition timing system
Operation
The operation of the two types of four-stroke engine, spark-ignition and diesel, is broadly similar, though they differ in some respects. The general operating principles of four-stroke engines are as follows:
- Stage one (intake stroke):
In this stage the intake valve opens as the piston moves downward in the cylinder. This draws the air-fuel mixture (in spark-ignition engines) or plain air (in diesel engines) into the cylinder, filling it.
- Stage two (compression stroke):
This stage begins when the piston starts to move upward from its lowest position. Both the intake and exhaust valves are closed. The piston compresses the charge in the cylinder into the combustion chamber in the cylinder head.
- Stage three (power stroke):
In this stage the charge in the combustion chamber is ignited (by an electric spark in spark-ignition engines, and by fuel injection in diesel engines). The valves remain closed. The energy released by burning the fuel exerts pressure on the piston, driving it downward.
- Stage four (exhaust stroke):
In this stage the combustion gases fill the entire cylinder. The exhaust valve opens to expel the hot combustion gases from the engine through the exhaust manifold. The upward movement of the piston also helps expel the gases.
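The four stages described above can be summarized side by side for the spark-ignition and diesel cases. This is just a tabulation of the text, not a simulation:

```python
# The four-stroke cycle: each entry gives the stroke name, the piston
# motion, and what happens in spark-ignition (SI) vs. diesel engines.
CYCLE = [
    ("intake",      "piston down", "mixture drawn in (SI) / air only (diesel)"),
    ("compression", "piston up",   "both valves closed, charge compressed"),
    ("power",       "piston down", "spark (SI) / fuel injection (diesel) ignites charge"),
    ("exhaust",     "piston up",   "exhaust valve open, burnt gas expelled"),
]

for stroke, motion, event in CYCLE:
    print(f"{stroke:11s} {motion:11s} {event}")
```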
Applications
Four-stroke engines are today the most widely used combustion engines, employed in a broad range of vehicles, owing to their good acceleration, efficiency and flexibility.
Related topics
- Types of vehicles
- Ignition timing
- Valve system
- Fuel delivery system
- Otto cycle
- Diesel cycle
- Valve timing
- Internal combustion engine
- Two-stroke engine
- Spark-ignition engine
- Diesel engine
- Crankshaft
In electrical engineering, power engineering and the electric power industry, power conversion is converting electric energy from one form to another, converting between AC and DC, or just changing the voltage or frequency, or some combination of these. A power converter is an electrical or electro-mechanical device for converting electrical energy. This could be as simple as a transformer to change the voltage of AC power, but also includes far more complex systems. The term can also refer to a class of electrical machinery that is used to convert one frequency of alternating current into another frequency.
Power conversion systems often incorporate redundancy and voltage regulation.
One way of classifying power conversion systems is according to whether the input and output are alternating current (AC) or direct current (DC):
- AC to DC: the rectifier
- DC to AC: the inverter
- DC to DC: the DC-to-DC converter
- AC to AC: the transformer or frequency changer
There are also devices and methods to convert between power systems designed for single and three-phase operation.
The standard power frequency varies from country to country, and sometimes within a country. In North America and northern South America it is usually 60 hertz (Hz), but in many other parts of the world it is usually 50 Hz. Aircraft often use 400 Hz power, so 50 Hz or 60 Hz to 400 Hz frequency conversion is needed in the ground power unit used to power the airplane while it is on the ground.
Certain specialized circuits, such as the flyback transformer for a CRT, can also be considered power converters.
Consumer electronics usually include an AC adapter (a type of power supply) to convert mains-voltage AC to low-voltage DC suitable for consumption by microchips. Consumer voltage converters (also known as "travel converters") are used when travelling between countries that use ~120 V and ~240 V AC mains power. (There are also consumer "adapters" which merely form an electrical connection between two differently shaped AC power plugs and sockets, but these change neither voltage nor frequency.)
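For the step-down case mentioned above (~240 V to ~120 V mains), an ideal transformer scales voltage by its turns ratio. The relation Vs = Vp · (Ns/Np) and the example turns counts below are standard transformer theory, not taken from this text:

```python
# Ideal transformer: secondary voltage scales with the turns ratio.
def secondary_voltage(v_primary, n_primary, n_secondary):
    """Secondary voltage of an ideal transformer (no losses assumed)."""
    return v_primary * n_secondary / n_primary

# Stepping 240 V mains down to ~120 V needs a 2:1 turns ratio.
print(secondary_voltage(240.0, 200, 100))  # -> 120.0
```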
- Cascade converter
- Rotary converter
- Three-phase electric power
The goal of control engineering design is to obtain the configuration, specifications, and identification of the key parameters of a proposed system to meet an actual need.
The first step in the design process consists of establishing the system goals. For example, we may state that our goal is to control the velocity of a motor accurately. The second step is to identify the variables that we desire to control (for example, the velocity of the motor). The third step is to write the specifications in terms of the accuracy we must attain. This required accuracy of control will then lead to the identification of a sensor to measure the controlled variable.
As designers, we proceed to the first attempt to configure a system that will result in the desired control performance. This system configuration will normally
consist of a sensor, the process under control, an actuator, and a controller, as shown
in Figure 1.9. The next step consists of identifying a candidate for the actuator. This
will, of course, depend on the process, but the actuation chosen must be capable of
effectively adjusting the performance of the process. For example, if we wish to control the speed of a rotating flywheel, we will select a motor as the actuator. The sensor, in this case, will need to be capable of accurately measuring the speed. We then
obtain a model for each of these elements.
The next step is the selection of a controller, which often consists of a summing
amplifier that will compare the desired response and the actual response and then
forward this error-measurement signal to an amplifier.
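The summing comparison just described, with the error amplified and fed to the actuator, can be sketched as a minimal proportional speed loop. The first-order motor model, gain and time constant are assumptions for illustration, not values from the text:

```python
# Minimal proportional control loop for motor speed: the controller
# subtracts the measured response from the desired response (the
# summing amplifier) and amplifies the error to drive the actuator.
def simulate(desired=100.0, kp=5.0, tau=0.5, dt=0.01, steps=2000):
    """Euler-integrate a first-order motor under proportional control."""
    speed = 0.0
    for _ in range(steps):
        error = desired - speed          # summing amplifier output
        u = kp * error                   # amplified error-measurement signal
        speed += dt * (u - speed) / tau  # first-order process response
    return speed

print(round(simulate(), 1))  # settles near 83.3, below the desired 100
```

The steady-state value falls short of the desired 100: a purely proportional controller leaves a residual error, which is one reason the design process iterates on the controller and its parameters.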
The final step in the design process is the adjustment of the parameters of the
system in order to achieve the desired performance. If we can achieve the desired
performance by adjusting the parameters, we will finalize the design and proceed to
document the results. If not, we will need to establish an improved system configuration and perhaps select an enhanced actuator and sensor. Then we will repeat the
design steps until we are able to meet the specifications, or until we decide the specifications are too demanding and should be relaxed. The control system design
process is summarized in Figure 1.22.
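The iterative loop just described can be sketched in miniature: try parameter adjustments within a configuration, and move to an improved configuration when no parameter setting meets the specifications. The function names and the toy specification below are hypothetical stand-ins:

```python
# Sketch of the iterative design process: for each candidate system
# configuration, search the parameter settings; finalize the design on
# the first combination that meets the specifications.
def design(configurations, candidate_params, meets_specs):
    for config in configurations:        # establish the system configuration
        for params in candidate_params:  # adjust/optimize the parameters
            if meets_specs(config, params):
                return config, params    # finalize the design
    return None  # no design found: the specifications may be too demanding

# Toy run: the "spec" is met when gain times configuration factor exceeds 10.
result = design([1, 2], [3, 4, 5, 6], lambda c, p: c * p > 10)
print(result)  # -> (2, 6)
```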
The performance specifications will describe how the closed-loop system should perform and will include (1) good regulation against disturbances, (2) desirable responses to commands, (3) realistic actuator signals, (4) low sensitivities, and (5) robustness.
The design process has been dramatically affected by the advent of powerful
and inexpensive computers and effective control design and analysis software. For
example, the Boeing 777, which incorporates the most advanced flight avionics of
any U.S. commercial aircraft, was almost entirely computer-designed [62, 63]. Verification of final designs in high-fidelity computer simulations is essential. In many applications, the certification of the control system in realistic simulations represents a
significant cost in terms of money and time. The Boeing 777 test pilots flew about
2400 flights in high-fidelity simulations before the first aircraft was even built.
Another notable example of computer-aided design and analysis is the McDonnell Douglas Delta Clipper experimental vehicle DC-X, which was designed, built,
and flown in 24 months. Computer-aided design tools and automated code generation contributed to an estimated 80 percent cost savings and 30 percent time savings.
[Figure 1.22: The control system design process. 1. Establish control goals; 2. Identify the variables to control; 3. Write the specifications for the variables; 4. Establish the system configuration and identify the actuator; 5. Obtain a model of the process, the actuator, and the sensor; 6. Describe a controller and select key parameters to be adjusted; 7. Optimize the parameters and analyze the performance. If the performance meets the specifications, finalize the design; if not, iterate the configuration and the actuator.]
In summary, the controller design problem is as follows: Given a model of the
system to be controlled (including its sensors and actuators) and a set of design
goals, find a suitable controller, or determine that none exists. As with most of engineering design, the design of a feedback control system is an iterative and nonlinear
process. A successful designer must consider the underlying physics of the plant
under control, the control design strategy, the controller design architecture (that is,
what type of controller will be employed), and effective controller tuning strategies.
In addition, once the design is completed, the controller is often implemented in
hardware, hence issues of interfacing with hardware can surface. When taken together, these different phases of control system design make the task of designing
and implementing a control system quite challenging.
The effects of fossil fuel utilization on the quality of our air are well-documented. The problem is
that many nations have an imbalance in the supply and demand of energy. Basically,
they use more than they produce. To address this imbalance, many engineers are
considering developing advanced systems to access other sources of energy, including wind energy. In fact, wind energy is one of the fastest-growing forms of energy
generation in the United States and in other locations around the world. A wind
farm now in use in western Texas is illustrated in Figure 1.21.
In 2002, the installed global wind energy capacity was over 31,000 MW. In the
United States, there was enough energy derived from wind to power over 3 million
homes (according to the American Wind Energy Association). For the past 30 years,
researchers have concentrated on developing technologies that work well in high
wind areas (defined to be areas with a wind speed of at least 6.7 m/s at a height of
10 m). Most of the easily accessible high wind sites in the United States are now utilized, and improved technology must be developed to make lower wind areas more
cost effective. New developments are required in materials and aerodynamics so
that longer turbine rotors can operate efficiently in the lower winds, and in a related
problem, the towers that support the turbine must be made taller without increasing
the overall costs. In addition, advanced controls will have to be employed to enable
the level of efficiency required in the wind generation drive train.
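The emphasis above on lower-wind-speed sites and longer rotors follows from the standard wind-power relation P = ½ρAv³Cp, which is textbook wind-energy theory rather than something stated here; the air density and power coefficient below are assumed values:

```python
import math

# Available rotor power scales with the cube of wind speed and the
# square of rotor radius.  rho (air density) and cp (power coefficient)
# are assumed illustrative values.
def rotor_power_kw(radius_m, wind_mps, rho=1.225, cp=0.4):
    area = math.pi * radius_m ** 2        # swept area, m^2
    return 0.5 * rho * area * wind_mps ** 3 * cp / 1000.0

# Power falls steeply below the 6.7 m/s high-wind threshold...
print(round(rotor_power_kw(40.0, 6.7), 1))
print(round(rotor_power_kw(40.0, 5.0), 1))
# ...which a longer rotor (larger swept area) can partly recover.
print(round(rotor_power_kw(55.0, 5.0), 1))
```

Because power grows with the cube of wind speed, halving the wind speed cuts available power by a factor of eight, which is why longer rotors and taller towers are needed to make lower-wind sites cost-effective.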
Advances in alternate energy products, such as the hybrid automobile and the
generation of efficient wind power generators, provide vivid examples of mechatronics development. There are numerous other examples of intelligent systems
poised to enter our everyday life, including smart home appliances (e.g., dishwashers, vacuum cleaners, and microwave ovens), wireless network-enabled devices, “human-friendly machines” which perform robot-assisted surgery, and implantable sensors and actuators.