Sunday, March 30, 2014

75% Fuel Economy Improvement Achieved with Exa's Simulations

Exa Corporation, a global innovator of fluids simulation solutions for product engineering, stated that Cummins Inc. and Peterbilt Motors Company, the first team to announce their SuperTruck for the Department of Energy (DOE) SuperTruck Program, credited Exa's technology and engineering expertise as instrumental in the success of their recently announced vehicle. Exa worked with engineers from both Cummins and Peterbilt to perform vehicle aerodynamic and thermal simulations to achieve significant efficiency improvements throughout the tractor, trailer and engine. These simulations, done long before a physical prototype was ever created, helped this SuperTruck exceed the required 50% efficiency improvement and deliver a 75% more efficient truck -- ahead of schedule.

The remarkable improvements were made possible through the collaboration of two world-class companies. They evaluated the entire truck, from the underhood cooling requirements and engine housing to every part of the tractor and trailer. "It was not one aerodynamic or thermal simulation that made the difference," stated David Koeberlein, Cummins' Program Lead for the SuperTruck program. "Using Exa's vehicle simulations, we were able to rapidly find and address areas of thermal and aerodynamic efficiency throughout the truck -- it was a critical resource for our vehicle team."

The project started with Cummins' engineers digitally packaging their new, energy-efficient engine, designed with a waste heat recovery system, into the Peterbilt tractor CAD geometry. They then added heat exchangers and simulated the thermal performance of the complete system. "Exa's technology was able to quickly demonstrate, through simulation alone, optimal cooling package design," remarked Jon Dickson, Cummins Engineering Manager of Advanced Vehicle Integration. "To package the new waste heat recovery condenser, we had to redesign the vehicle heat exchanger system and use a non-traditional layout. We were able to use Exa's PowerCOOL and PowerTHERM to identify areas to improve thermal performance while maximizing aerodynamic efficiency -- years before any vehicle was built."

At the same time, Landon Sproull, Peterbilt Chief Engineer, and Rick Mihelic, Peterbilt Manager of Vehicle Performance and Engineering Analysis, were evaluating their tractor and trailer combinations for aerodynamic and thermal performance. "Over the course of three years, we ran hundreds of simulations to test and analyze every part of this truck using Exa PowerFLOW," stated Mihelic. "We designed a completely new SuperTruck aerodynamic package which included visible devices such as trailer skirts and wheel well covers, as well as unseen, but critical, underbody shields that optimize airflow and thermal efficiency." Sproull added, "Using visualization of the simulation results, our team analyzed each area looking for opportunities for improvement. Our designers and engineers could easily review and discuss results and optimization options -- something simply not possible in a wind tunnel. It was this comprehensive vehicle analysis that helped us achieve extreme efficiency savings that exceeded even the aggressive goals set by the DOE."

"Each day our customers seek efficiency improvements using our aerodynamic, thermal and acoustic solutions," remarked Stephen Remondi, Exa's President and CEO. "We have been working with Peterbilt for many years and were pleased to see them use Exa's solutions so effectively as part of this important initiative that will benefit us all in the end."

About Exa Corporation

Exa Corporation develops, sells and supports simulation software and services to enhance product performance, reduce product development costs and improve the efficiency of design and engineering processes. Exa's simulation solutions enable customers to gain crucial insights about design performance early in the design cycle, reducing the likelihood of expensive redesigns and late-stage engineering changes. As a result, Exa's customers realize significant cost savings and fundamental improvements in their engineering development process. Exa's products include PowerFLOW, PowerDELTA with PowerCLAY, PowerVIZ, and PowerSPECTRUM, along with professional engineering consulting services. A partial customer list includes: AGCO, BMW, Ford, Hyundai, Jaguar Land Rover, Kenworth, MAN, Nissan, Peterbilt, Renault, Scania, Toyota, Volkswagen, and Volvo Trucks.

refer to:
http://embedded-computing.com/news/exas-improvement-cummins-peterbilt-supertruck/

Monday, March 24, 2014

Acrosser’s Embedded Products in the Media



In February, Acrosser Technology was interviewed by Elektronik Praxis and Digitimes, two news outlets with strong reputations in the embedded technology industry in Germany and Taiwan, respectively. Here we share a summary of the two interviews.

There are many industrial computer manufacturers in Taiwan, and in this competitive environment, it pays to be smart. For over two decades, Acrosser Technology’s claim to fame has been its staffing structure: one third of its staff belongs to the Research and Development Department. For IPC manufacturers, a larger number of people engaged in research represents a greater effort in design, communication, verification and validation behind each industrial product. For instance, all car PCs from Acrosser undergo a series of anti-shock/vibration tests before final production. Both of Acrosser’s in-vehicle computers, the AR-V6100FL and AR-V6005FL, were awarded the Taiwan Excellence Award, and Acrosser still supplies these car computers to system integrators globally. The fanless car computers feature Intel Core series processors (i7, i5, Celeron), rich I/O interfaces, and an integrated graphics processor, allowing each customer to find the best in-vehicle solution for their industry.

As for the embedded computer market, Acrosser has chosen its fanless embedded system, the AES-HM76Z1FL, to reach its target audience. With a fanless design, Core i series processor, and an ultra-slim body as its three main features, the so-called “F.I.T. Technology” that makes up the AES-HM76Z1FL has garnered numerous business inquiries since its release last year. The standard I/O ports (HDMI, VGA, USB, audio and GPIO) and small form factor make the AES-HM76Z1FL an appealing solution for the following applications: security control, banking systems, ATMs, kiosks, digital signage, e-commerce via cloud applications, network terminals, and more. With its optional Mini PCIe socket for a 3.5G or WiFi module, the capability of wireless communication allows the AES-HM76Z1FL to be a feasible addition to any transportation management control system.

To further promote the advantages of our book-sized mini PC, Acrosser has launched a free Product Testing Event starting in January 2014. Acrosser received a great deal of positive feedback from the security, financial, and entertainment industries. If you are looking for embedded products with great computing performance, do not miss the final chance to submit your application now!

Aside from its traditional industrial PCs, in-vehicle computers and embedded systems, Acrosser has a wide array of other product lines, including all-in-one gaming systems, single-board computers, panel PCs, industrial touch displays, rackmount servers and network appliance devices, waiting for you to make your embedded idea a reality.

Sunday, March 16, 2014

Connected but private: Transporter aims to be your off-cloud Dropbox


Can the gap between personal and cloud storage be easily bridged? Connected Data's Transporter aims to create remote data storage that's not actually stored in the cloud.

The cloud may be the future of all things storage, but the present is more complicated: it can be expensive, potentially insecure, and you're left trusting a third party with all your data.

That's what inspired The Transporter, a Kickstarter project started by former employees of Drobo. Transporter aims for something more secure and distributed, while still being sharable. The concept largely works like Dropbox, with a Transporter folder that lives on your desktop and syncs with files stored on the physical Transporter drive (which resides someplace you designate). You can easily give others access to specific folders, although they will need to register for a free Transporter account.

The physical Transporter is the big difference; all your data lives on your own drive, rather than on a third party's cloud servers (which could be located in data centers anywhere in the world). In addition to giving you the peace of mind of having the drive under your personal control, having the Transporter on your local home or business network will make for faster transfer speeds while you're on-site. (When accessing the Transporter remotely, of course, you'll be subject to the host location's upstream and downstream data speeds.)

The Transporter itself includes housing for a 2.5-inch SATA hard drive, with an Ethernet and a USB port on the back. It can work with Wi-Fi, but you need to buy an adapter that connects via USB. It sounds a lot like other hard-drive housings, but the Transporter is meant to be used in tandem with other Transporters. Plug one in somewhere, and it can share its drive with the others, syncing and copying data between them, depending on how you configure your folders. Even better, if any drive were to fail, the information is redundantly stored on every other Transporter connected to the network, in addition to PCs that have the shared Transporter folder.
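
The syncing model Connected Data describes is essentially peer-to-peer file replication. As a rough illustration only (not Connected Data's actual software, whose internals weren't shown), here is a minimal Python sketch of the one-way half of that idea: hash the files in a local sync folder and copy anything new or changed to a peer's copy. All paths are assumptions, and real multi-device sync would add deletion tracking and conflict handling.

```python
import hashlib
import shutil
from pathlib import Path

SRC = Path("~/Transporter").expanduser()   # assumed: local sync folder
DST = Path("/mnt/peer-transporter")        # assumed: a peer's mounted copy

def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

for f in SRC.rglob("*"):
    if not f.is_file():
        continue
    target = DST / f.relative_to(SRC)
    # Copy anything the peer doesn't have, or holds a different version of.
    if not target.exists() or digest(target) != digest(f):
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)
```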

The strongest part of The Transporter's pitch comes down to pricing. Yes, Dropbox offers a lot of the same functionality without the need for hardware, but it gets pricey quickly: 100GB is $100 per year and 500GB is $500 per year. For large storage amounts, the Transporter's no-subscription-fee model is much more affordable: a 1TB Transporter for $300, a 2TB Transporter for $400, plus you can buy the hardware without storage for $200 and add your own hard drive later. It might make a lot of sense for professionals who need to offer access to large files and don't want to deal with antiquated FTP transfers.

What's the difference between this and any other networked hard drive? Theoretically, ease of use and a setup process that may be able to easily bypass firewalls and port settings, like the Pogoplug. In our meeting with Connected Data, no demo of the software was shown; all we saw was the Transporter box itself. It's reasonably attractive, but ultimately the success of the hardware is going to come down to the quality of the software and overall experience.

The Transporter's laser focus on data storage and backup means it's not quite as flexible as a more traditional network-attached storage (NAS) drive. Sure, you can store your personal photos, music, and videos on a Transporter, but it lacks a built-in media server (such as DLNA or AirPlay) that would make it easy to access those on, say, an Apple TV or PS3 without leaving a separate computer on. While the Transporter team says it's looking into those types of features for the future, at the moment it's really more of a personal storage device than a full-fledged NAS replacement.

We've felt the pain of dealing with data, like videos and photos, that takes up too much space for cloud storage yet still needs to be shared as well as secured and backed up. Transporter sounds like it fills some of those needs (storage, shareability), but not all of them. The question is, are there enough people out there who need a product like this for it to be successful? It's hard to say, but The Transporter's Kickstarter campaign raised more than double its $100,000 goal, plus the company announced today that it has secured $6 million in additional financing.
The Transporter is available to order today, directly from Connected Data. We're expecting to get a review unit soon, so we can see if its software and services deliver on their promise.

refer to:
http://reviews.cnet.com/8301-3382_7-57566899/connected-but-private-transporter-aims-to-be-your-off-cloud-dropbox/

Monday, March 10, 2014

Stay social with the Acrosser AMB-D255T3 Mini-ITX Board!

To further promote Acrosser products, we will continue to enrich our web content and translate our website into more languages for our global audience. This month, Acrosser has created a short film that highlights its Mini-ITX board, AMB-D255T3, using close-ups to capture its best features from different angles.
One fascinating feature of the AMB-D255T3 is its large heatsink, which conducts heat away from the board more effectively. Secondly, the large array of intersecting aluminum fins increases the heat-radiating area as well as the heat-dissipation efficiency. The fanless design also eliminates the risk of fan malfunction, raising the product's life expectancy. Without a fan, the AMB-D255T3 single board computer performs steadily in a cool and quiet way.
Built around the Intel Atom D2550, the AMB-D255T3 was developed to provide abundant peripheral interfaces to meet the needs of different customers. For those looking for expansion, the board provides one Mini PCIe socket for a wireless or storage module. For video interfaces, it features dual displays via VGA, HDMI or 18-bit LVDS, satisfying as many industries as possible.
In conclusion, Acrosser’s AMB-D255T3 is a perfect combination of low power consumption and great computing performance. The complete set of I/O functions allows system integrators to apply our AMB-D255T3 to all sorts of solutions, making their embedded ideas a reality.

Follow us on Twitter!
http://twitter.com/ACROSSERmarcom

Monday, March 3, 2014

Embedded Virtualization: Latest Trends and Techniques

Data center architectures have been increasingly influencing all areas of embedded systems. Virtualization techniques are commonplace in enterprises and data centers as a way to increase compute capacity and reduce floor space and power consumption. From networking to smartphones, industrial control to point-of-sale systems, the embedded market is also accelerating the adoption of virtualization for some of the same reasons, as well as others unique to embedded systems.
Virtualization is the creation of software abstraction on top of a hardware platform and/or Operating System (OS) that presents one or more independent virtualized OS environments.
Enterprise and data center environments have been using virtualization for years to maximize server platform performance and run a mix of OS-specific applications on a single machine. They typically take one server blade or system and run multiple instances of a guest OS and web/application server, then load balance requests among these virtual server/app environments. This enables a single hardware platform to increase capacity, lower power consumption, and reduce physical footprint for web- and cloud-based services.
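To make this server-consolidation model concrete, here is a minimal sketch, assuming the open source libvirt-python bindings and a local QEMU/KVM host (one common stack, not one this article prescribes), that enumerates the guest OS instances sharing a single machine and their resource allocations.
```python
import libvirt  # assumed: libvirt-python bindings installed

# Connect read-only to the local system hypervisor; listing needs no writes.
conn = libvirt.openReadOnly("qemu:///system")
if conn is None:
    raise SystemExit("failed to connect to the hypervisor")

# Each domain is one guest OS instance sharing the same physical server.
for dom in conn.listAllDomains():
    state, maxmem, mem, vcpus, cputime = dom.info()
    status = "running" if dom.isActive() else "stopped"
    print(f"{dom.name()}: vCPUs={vcpus}, memory={mem // 1024} MiB, {status}")

conn.close()
```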
Within the enterprise, virtualized environments may also be used to run applications that are only available on a specific OS. In these cases, virtualization allows a host OS to run a guest OS that in turn runs the desired application. For example, a Windows machine may run a VMware virtual machine with Linux as the guest OS in order to run a Linux-only application.
How is embedded virtualization different?
Unlike data center and enterprise IT networks, embedded systems span a very large number of processors, OSs, and purpose-built software. So introducing virtualization to the greater embedded systems community isn’t just a matter of supporting Windows and Linux on Intel architecture. The primary drivers for virtualization are different as well. Embedded systems typically consist of a real-time component where it is critical to perform specific tasks within a guaranteed time period and a non-real-time component that may include processing real-time information, managing or configuring the system, and use of a Graphical User Interface (GUI).
Without virtualization, the non-real-time components can compromise the real-time nature of the system, so often these non-real-time components must run on a different processor. With virtualization these components can be combined on a single platform while still ensuring the real-time integrity of the system.
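On plain Linux, a small piece of the same partitioning idea can be expressed even without a hypervisor: pin the time-critical task to a dedicated core under a real-time scheduling policy so that best-effort work elsewhere cannot preempt it. The sketch below is illustrative only; the core number and priority are assumptions, and a hypervisor provides a stronger version of this isolation between whole guest OSs.
```python
import os

RT_CORE = 3        # assumed: a CPU core set aside for the real-time task
RT_PRIORITY = 50   # mid-range SCHED_FIFO priority (needs root/CAP_SYS_NICE)

def make_realtime() -> None:
    # Restrict this process to the reserved core...
    os.sched_setaffinity(0, {RT_CORE})
    # ...and switch it to the fixed-priority real-time scheduler.
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(RT_PRIORITY))

if __name__ == "__main__":
    make_realtime()
    # The time-critical control loop would run here, undisturbed by
    # best-effort processes scheduled on the remaining cores.
```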
Technologies enabling embedded virtualization
There are some key capabilities required for embedded virtualization: multicore processors and VM monitors (hypervisors) that support the relevant OSs and processor architectures. In the enterprise/data center world, Intel architecture has been implementing multicore technology for years now. Having multiple truly independent cores and symmetric multiprocessing laid the groundwork for the widespread use of virtualization. In the embedded space, there are even more processor architectures to consider, like ARM and its many variants, MIPS, and Freescale's PowerPC/QorIQ architectures. Many of these processor technologies have only recently started incorporating multicore. Further, hypervisors must be made available for these processor architectures. Hypervisors must also be able to host the variety of real-time and embedded OSs found in the embedded world. Many Real-Time Operating System (RTOS) vendors are introducing hypervisors that support Windows and Linux along with their RTOS, which provides an embedded baseline that enables virtualization.
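A quick way to check whether a given x86 Linux board has the hardware assists these hypervisors build on is to look for the "vmx" (Intel VT-x) or "svm" (AMD-V) CPU flags. A stdlib-only sketch, Linux/x86-specific by assumption:
```python
import os

def cpu_flags() -> set:
    # /proc/cpuinfo is Linux-specific; other architectures report
    # virtualization support differently.
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("cores:", os.cpu_count())
print("hardware virtualization:",
      "yes" if flags & {"vmx", "svm"} else "no/unknown")
```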
Where are we in the adoption?
As multicore processors continue to penetrate embedded systems, the use of virtualization is increasing. More complex embedded environments that include a mix of real-time processing with user interfaces, networking, and graphics are the most likely applications. Another feature of embedded environments is the need to communicate between the VM environments – the real-time component must often provide the data it's collecting to the non-real-time VM environment for reporting and management. These communication channels are often not needed in the enterprise/data center world, since each VM communicates independently.
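One common transport for such a channel on KVM-class hypervisors is a vsock socket, which bypasses the virtual network entirely. The sketch below shows a guest-side producer pushing samples to a peer; the context ID and port are assumed values, and it requires a Linux kernel and Python build with AF_VSOCK support.
```python
import socket

PEER_CID = 3   # assumed: context ID of the VM running the management side
PORT = 5000    # assumed: port both sides agreed on

def send_sample(payload: bytes) -> None:
    # vsock addresses are (CID, port) tuples rather than (host, port).
    with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s:
        s.connect((PEER_CID, PORT))
        s.sendall(payload)

send_sample(b"sensor-reading:42")
```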
LynuxWorks embedded virtualization perspective
Robert Day, Vice President of Sales and Marketing at LynuxWorks (www.lynuxworks.com), echoed much of this history and the current state of embedded systems and virtualization. “Data center environments are nowhere near as diverse as the embedded systems environment. In addition, embedded environments are constrained – the virtualization layer must deal with specific amounts of memory and accommodate a variety of CPUs and SoC variants.”
Day notes that embedded processors are now coming out with capabilities to better support embedded virtualization. Near-native performance is perhaps more important in embedded than enterprise applications, so these hypervisors and their ability to provide a thin virtualization and configuration layer, then “get out of the way” is an important feature that provides the performance requirements the industry needs.
Day references Type 2 hypervisors, which run on or depend on another OS – this kind of configuration simply doesn’t work in most embedded environments, due to the loss of near-native performance as well as the potential compromise of real-time characteristics. Type 1 hypervisors – a software layer running directly on the hardware and providing the resource abstraction to one or more OSs – can work, but tend to have a large memory footprint since they often rely on a “helper” OS inside the hypervisor. For this reason, LynuxWorks coined the term “Type 0 hypervisor” – a type of hypervisor that has no OS inside. It’s a small piece of software that manages memory, devices, and processor core allocation. The hypervisor contains no drivers – it just tunnels through to the guest OSs. The disadvantage is that it doesn’t provide all the capabilities that might be available in a fuller-featured hypervisor.
Embedded system developers typically know the platform their systems run on, what OSs are used, and what the application characteristics are. In these cases, it’s acceptable to use a relatively static configuration that gains higher performance at the expense of less flexibility – certainly an acceptable trade-off for embedded systems.
LynuxWorks has been seeing embedded developers take advantage of virtualization to combine traditionally separate physical systems into one virtualized system. One example Day cited was combining a real-time sensor environment that samples data with the GUI management and reporting system.
Processors that incorporate Memory Management Units (MMUs) support the virtualized memory maps well for embedded applications. A more challenging area is the sharing or allocating of I/O devices among or between virtualized environments. “You can build devices on top of the hypervisor, then use these devices to communicate with the guest OSs,” Day says. “This would mean another virtual system virtualizing the device itself.” Here is where an I/O MMU can provide significant help. The IOMMU functions like an MMU for the I/O devices. Essentially the hypervisor partitions devices to go with specific VM environments and the IOMMU is configured to perform these tasks. Cleanly partitioning the IOMMU allows the hypervisor to get out of the way once the device is configured and the VM environment using that device can see near-native performance of the I/O.
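On a Linux host the resulting partitioning is visible in sysfs: devices in the same IOMMU group must be assigned to the same VM, so the groups determine how cleanly I/O can be split among guests. A small sketch, assuming a Linux host with the IOMMU enabled, that lists them:
```python
from pathlib import Path

root = Path("/sys/kernel/iommu_groups")
if not root.exists():
    raise SystemExit("no IOMMU groups exposed (IOMMU disabled or unsupported)")

# Each numbered directory is one isolation unit for device assignment.
for group in sorted(root.iterdir(), key=lambda p: int(p.name)):
    devices = [d.name for d in (group / "devices").iterdir()]
    print(f"group {group.name}: {', '.join(devices)}")
```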
LynuxWorks has seen initial virtualization use cases in defense applications. The Internet of Things (IoT) revolution is also fueling the embedded virtualization fire.
Virtualization is one of the hottest topics today and its link to malware detection and prevention is another important aspect. Day mentioned that malware detection is built into the LynuxWorks hypervisor. This involves the hypervisor being able to detect behavior of certain types of malware as the guest OSs run. Because of the privileged nature of the hypervisor, it can look for certain telltale activities of malware going on with the guest OS and flag these. Most virtualized systems have some method to report suspicious things from the hypervisor to a management entity. When the reports are sent, the management entity can take action based on what the hypervisor is reporting. As virus and malware attacks become more purpose-built to attack safety-critical embedded applications, these kinds of watchdog capabilities can be an important line of defense.
Wind River embedded virtualization perspective
Technology experts Glenn Seiler, Vice President of Software Defined Networking, and Davide Ricci, Open Source Product Line Manager, both at Wind River (www.windriver.com), say virtualization is important in the networking world.
A network transformation is underway: The explosion of smart portable devices, coupled with their bandwidth-hungry multimedia applications, has brought us to a crossroads in the networking world. Like the general embedded world, network infrastructure is taking a page from enterprise and data center distributed architectures to transform the network from a collection of fixed-function infrastructure components to general compute and packet processing platforms that can host and run a variety of network functions. This transformation is called Software Defined Networking (SDN). Coupled with this initiative is Network Functions Virtualization (NFV) – taking networking functionality like bridging, routing, network monitoring, and deep packet inspection and creating software components that can run within a virtualized environment on a piece of SDN infrastructure. This model closely parallels how data centers work today, and it promises to lower operational expense, increase flexibility, and shorten new services deployment.
Seiler mentions that there has been considerable pull from service providers to create NFV-enabled offerings from traditional telecom equipment manufacturers. “Carriers are pushing toward NFV. Wind River has been developing their technical product requirements and virtualization strategy around ETSI NFV specifications. This has been creating a lot of strong demand for virtualization technologies and Wind River has focused a lot of resources on providing carrier-grade virtualization and cloud capabilities around NFV.”
Seiler outlines four important tenets that are needed to support carrier-grade virtualization and NFV:
Reliability and availability. Network infrastructure is moving toward enterprise and data center architectures, but must do so while maintaining carrier-grade reliability and availability.
Performance. Increasing bandwidths and real-time requirements, such as baseband processing and multimedia streaming, require near-native performance with NFV.
Security. Intelligent virtualized infrastructure must maintain security and be resistant to malware or viruses that might target network infrastructure.
Manageability. Virtualized, distributed network components must be manageable transparently through existing OSS/BSS, support reconfiguration, and remain resilient to any single point of failure.
Wind River recently announced Wind River Open Virtualization. This is a virtualization environment based on Kernel-based Virtual Machine (KVM) that delivers the performance and management capabilities required by communications service providers. Service provider expectations for NFV are ambitious – among them being able to virtualize base stations and radio access network controllers – and to support these kinds of baseband protocols at peak capacity, the system has to have significant real-time properties.
Specifically, Wind River looked at interrupt and timer latencies for natively running applications versus applications running on a hypervisor managing the VMs. Ricci mentioned that Wind River engineers spent a significant amount of time developing on the KVM open source baseline to provide real-time preemption components with the ability to achieve near-native performance. Maintaining carrier-grade speeds is especially important for the telecom industry, as performance cannot be compromised.
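The flavor of that measurement is easy to reproduce: request a fixed sleep, record how late each wakeup actually is, and compare the distribution when run natively versus inside a guest. This stdlib-only sketch is far cruder than Wind River's methodology (which the article doesn't detail); the period and iteration count are arbitrary choices.
```python
import statistics
import time

PERIOD_NS = 1_000_000   # ask for a 1 ms tick
ITERATIONS = 1000

latencies = []
for _ in range(ITERATIONS):
    start = time.monotonic_ns()
    time.sleep(PERIOD_NS / 1e9)
    # How much later than requested did we actually wake up?
    latencies.append(time.monotonic_ns() - start - PERIOD_NS)

print(f"wakeup latency: median {statistics.median(latencies) / 1e3:.1f} µs, "
      f"max {max(latencies) / 1e3:.1f} µs")
```
Run natively and then inside a VM, the gap between the two distributions is the hypervisor overhead that real-time preemption work tries to shrink.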
refer to:
http://embedded-computing.com/articles/embedded-virtualization-latest-trends-techniques/