
Fictron Industrial Supplies Sdn Bhd
No. 7 & 7A,
Jalan Tiara, Tiara Square,
Taman Perindustrian Sime UEP,
47600 Subang Jaya,
Selangor, Malaysia.
+603-8023 9829
+603-8023 7089
Fictron Industrial Automation Pte Ltd

140 Paya Lebar Road, #03-01,
AZ @ Paya Lebar 409015,
Singapore.
+65 31388976
sg.sales@fictron.com

Latest News

Turck Releases Ultrasonic RU50 Eco Sensors for Cost-effective, Reliable Object Detection

May 30, 2019
MINNEAPOLIS, MN (May 29, 2019) - Turck expands its range of ultrasonic sensors with the RU50 Eco series by adding analog variants to the existing switching versions. With a plastic threaded barrel and a sensing range of 500 mm, the sensors are a cost-effective, high-performance detection solution for manufacturing applications such as tote or container detection, level monitoring, packaging applications and more.
 
The RU50 Eco sensors use the latest sonic transducer technology for reliable object detection. Teachable switching points and sensing distance allow users to configure the sensors to detect objects between the sensor and the reference point. Even poor lighting conditions and glossy or reflective surfaces have no influence on the sensor. PNP and NPN switching versions, as well as analog 4-20 mA and 0-10 V outputs, are available.
 
The plastic threaded barrel is made of sturdy liquid crystal polymer (LCP), and output options include an M12 connector or a 2 m cable. The M12 connector features a translucent Ultem end cap that houses an integrated LED to give the user 360-degree visibility of the switch condition. Retro-reflective variants are also available.



This article is originally posted on Tronserve.com

Renishaw's Blue Laser Sets New Standard for On-Machine Tool Measurement

May 30, 2019
 
West Dundee, IL - May 29, 2019 - Renishaw, a precision engineering and manufacturing technologies company, announces the launch of its latest non-contact tool setting solution. The new NC4+ Blue system joins the many smart factory process control solutions that Renishaw has developed to help machine shops across many industries transform their production capabilities.
 
Building on the success of the enhanced NC4 range of tool setters launched in 2017, the NC4+ Blue is Renishaw's latest evolution of the non-contact tool setter, delivering a step-change in tool measurement precision, with tool-to-tool performance proven to the most up-to-date ISO230-10 standards.
 
Featuring the industry's first blue laser technology (patent pending) and improved optics, Renishaw's NC4+ Blue systems deliver significant improvements in tool measurement accuracy, ensuring components can be machined more precisely and with reduced cycle times.
 
Compared to red laser sources found in traditional non-contact tool setters, blue laser technology has a shorter wavelength, resulting in improved diffraction effects and optimized laser beam geometry. This enables the measurement of very small tools, while minimizing tool-to-tool measurement errors - a critical consideration when machining with a wide range of cutting tools.
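The wavelength advantage described above can be illustrated with the standard diffraction-limited spot-size approximation. The focal length and beam diameter below are generic assumptions chosen for illustration, not Renishaw specifications:

```python
import math

def spot_diameter(wavelength_m: float, focal_length_m: float, beam_diameter_m: float) -> float:
    """Approximate diffraction-limited focused spot diameter: d = 4 * lambda * f / (pi * D)."""
    return 4 * wavelength_m * focal_length_m / (math.pi * beam_diameter_m)

f = 0.05   # 50 mm focal length (assumed)
D = 0.004  # 4 mm collimated beam diameter (assumed)

red = spot_diameter(660e-9, f, D)   # typical red laser diode wavelength
blue = spot_diameter(405e-9, f, D)  # typical blue-violet laser diode wavelength

print(f"red:  {red * 1e6:.1f} um")
print(f"blue: {blue * 1e6:.1f} um")
print(f"blue spot is {red / blue:.2f}x smaller")  # spot size scales linearly with wavelength
```

For the same optics, the focused spot scales linearly with wavelength, which is why a shorter-wavelength source can resolve smaller tools with less measurement ambiguity.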
 
NC4+ Blue systems also use Renishaw's latest non-contact tool setting software packages, which include a new dual measurement mode with auto-optimization technology. Combined, these features ensure fast and reliable tool measurement - even in wet conditions - saving users time and money.
 
NC4+ Blue support is now embedded in Renishaw's extensive range of graphical user interfaces, including on-machine and mobile apps such as Renishaw Set and Inspect, and GoProbe. These consistent, easy-to-use programming platforms are well suited to users who are new to probing or have little machine-code knowledge, while still delivering operational gains to more experienced users.
 
Renishaw technologies provide the data that permits intelligent decision-making for Industry 4.0. On-machine tool measurement allows manufacturers to automate and optimize their processes and minimize quality problems and CNC machine stoppages. With the latest version of Renishaw's on-machine Reporter app, users can now view historical tool data captured by the NC4+ Blue and export the results for use in their chosen software and control systems.



This article is originally posted on Tronserve.com

Omron Helps University of Houston Engineering Students Gain Real-World Skills

May 30, 2019
Hoffman Estates, IL., May 29, 2019 - The University of Houston's Cullen College of Engineering recently unveiled a cutting-edge laboratory donated by the Omron Foundation, the charitable arm of automation solutions provider Omron in the United States. Developed for electrical and computer engineering students, the lab includes advanced technologies and equipment donated by Omron.
 
At the lab's opening ceremony, UH faculty and Omron representatives viewed a range of senior capstone projects, including a sorting robot and a cellular robotic billboard. The lab has an area dedicated to senior design projects, giving students real-world design experience that is helpful for gaining employment after graduation.
 
'Prospective employers will expect them to speak intelligently about what they worked on for their design project so the experience they obtain at this stage is really crucial,' says Len Trombetta, the associate department chair. 'This makes our graduates very marketable because these are skills companies need. We're thankful to Omron for making this possible.'
 
Omron Automation Americas President, CEO and COO Robb Black described the importance of preparing today's students for the latest challenges in engineering and manufacturing. 'We want to bring the skills they have learned in school into the manufacturing sector,' says Black. 'I think it's an excellent way for students to learn real-world technology and apply it once they leave.'
 
Omron Foundation has been supporting the Cullen College's electrical and computer engineering students since 2010, when it established the Omron Scholarship in electrical engineering and sponsored a team of students applying their engineering knowledge to real-world industry problems in the Capstone Design course. Omron also gives one-on-one mentoring to UH engineering students.



This article is originally posted on Tronserve.com

Tiny Robots Carry Stem Cells Through a Mouse

May 30, 2019
Engineers have developed micro-robots to perform all sorts of tasks in the body, and can now add to that list another key skill: delivering stem cells. In a paper released today in Science Robotics, researchers describe propelling a magnetically-controlled, stem-cell-carrying bot through a live mouse.
 
Under a rotating magnetic field, the micro-robots moved with rolling and corkscrew-style locomotion. The researchers, led by Hongsoo Choi and his team at the Daegu Gyeongbuk Institute of Science & Technology (DGIST), in South Korea, also demonstrated their bot's moves in slices of mouse brain, in blood vessels isolated from rat brains, and in a multi-organ-on-a-chip.
 
The invention provides an alternative way to deliver stem cells, which are increasingly important in medicine. Such cells can be coaxed into becoming nearly any kind of cell, making them great candidates for treating neurodegenerative disorders such as Alzheimer's.
 
But delivering stem cells typically demands an injection with a needle, which lowers the survival rate of the stem cells and limits their reach in the body. Micro-robots, however, have the potential to deliver stem cells to precise, hard-to-reach areas, with less damage to surrounding tissue and better survival rates, says Jin-young Kim, a principal investigator at the DGIST-ETH Microrobotics Research Center and an author on the paper.
 
The virtues of micro-robots have encouraged several research groups to propose and test different designs in simple conditions, such as micro-fluidic channels and other static environments. A group out of Hong Kong last year described a burr-shaped bot that carried cells through live, transparent zebrafish.
 
The new research presents a magnetically-actuated micro-robot that effectively carried stem cells through a live mouse. In additional experiments, the cells, which had differentiated into brain cells such as astrocytes, oligodendrocytes, and neurons, transferred to micro-tissues on the multi-organ-on-a-chip. Taken together, the proof-of-concept experiments demonstrate the potential for micro-robots to be used in human stem cell therapy, says Kim.
 
The team fabricated the robots with 3D laser lithography and fashioned them in two shapes: spherical and helical. Using a rotating magnetic field, the researchers navigated the spherical bots with a rolling motion and the helical bots with a corkscrew motion. These styles of locomotion proved more efficient than a simple pulling force and were better suited for use in biological fluids, the researchers reported.
 
The big challenge in navigating micro-bots in a live animal (or human body) is being able to see them in real time. Imaging with fMRI doesn’t work, because the magnetic fields interfere with the system. “To correctly control micro-bots in vivo, it is important to actually see them as they move,” the authors wrote in their paper.
 
That wasn’t possible during the experiments in a live mouse, so the researchers had to check the location of the micro-robots before and after the experiments using an optical tomography system called IVIS. They also had to resort to using a pulling force from a permanent magnet to navigate the micro-robots inside the mouse, due to the limitations of the IVIS system.
 
Kim says he and his colleagues are developing imaging systems that will enable them to view in real time the locomotion of their micro-robots in live animals.



This article is originally posted on Tronserve.com

The World's Largest Car Manufacturers

May 29, 2019
Fiat Chrysler Automobiles (FCA) has submitted a proposal for a merger with the Renault Group that would create the third largest automobile manufacturer in the world. According to the proposal, the newly formed group would be co-owned by FCA and Renault shareholders at a 50-50 split and feature an equal number of seats on the Board of Directors.
 
Fiat Chrysler, the group behind popular brands such as Jeep, Dodge, Ram and Alfa Romeo, was the seventh largest automobile group in the world last year, selling 4.8 million vehicles in total. Adding Renault’s 3.88 million cars sold in 2018 to that total would create the third largest player in the industry, behind Volkswagen and Toyota.
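The ranking above follows from simple arithmetic. The FCA and Renault figures are from the article; the Volkswagen and Toyota figures below are approximate assumptions added only for comparison:

```python
# 2018 unit sales in millions of vehicles.
sales_2018_millions = {
    "Volkswagen": 10.8,   # assumed approximate figure, not from the article
    "Toyota": 10.6,       # assumed approximate figure, not from the article
    "FCA": 4.8,           # from the article
    "Renault": 3.88,      # from the article
}

combined = sales_2018_millions["FCA"] + sales_2018_millions["Renault"]
print(f"FCA + Renault: {combined:.2f} million vehicles")

# Rank the combined group among the listed manufacturers.
rank = 1 + sum(1 for name, v in sales_2018_millions.items()
               if name not in ("FCA", "Renault") and v > combined)
print(f"Rank among the listed groups: {rank}")
```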
 
According to FCA’s proposal, the merger could result in synergies of more than €5 billion per year without relying on plant closures. “Combining the businesses will bring together complementary strengths”, the proposal reads, referring to geographic and segment coverage as well as to FCA’s capabilities in autonomous driving and Renault’s expertise in electric vehicles. Renault’s Board of Directors is scheduled to discuss the proposal on Monday, after which the company will issue a press release. Shareholders reacted positively to the news, with both companies’ shares up more than 10 percent in European trading on Monday.

 
This article is originally posted on Tronserve.com

Servitization for Sensors

May 29, 2019
Servitization is not a new phenomenon. Manufacturers of industrial equipment have long provided extended warranties, periodic assessments and extensive service contracts as part of their sales pitch. Today, however, the rapid pace of change in North American manufacturing is driving a remarkable increase in the servitization of products. The reason for this spike? Sensors.
 
Many servitization business models rely on the use of sensors in their products. Using data gathered by these sensors embedded in equipment, the service provider can continuously monitor the state of their product. They can then provide the necessary upgrades, repairs or maintenance when required, adding extra value for the customer.
 
Consider an industrial heating, ventilation and air conditioning (HVAC) system as an illustration. If an internal sensor detects that an air conditioning unit is not cooling air to the correct temperature, the HVAC supplier can be alerted at once. This allows the fault to be dealt with quickly and takes the responsibility for noticing, repairing and paying for repairs away from the end user.
 
But, what is the case for the servitization of sensors themselves?
 
Servitization is occurring in all industries. The manufacturing sector has already begun looking at the potential of Robots as a Service (RaaS) and has widely adopted various Software as a Service (SaaS) models, such as Supervisory Control and Data Acquisition (SCADA) and Manufacturing Execution Systems (MES). In fact, when the idea of Sensors as a Service first emerged, the abbreviation had to be rendered as S2aaS to avoid confusion.
 
Using a subscription model, S2aaS would enable manufacturers to pay a monthly or annual charge for the deployment of sensors in their facility. The cost would include maintenance, support and regular upgrades. This sounds ideal, but the servitization of sensors is not without its challenges.
 
Integration With Existing Sensors
 
The most apparent problem is the integration of new sensors with existing technology. Most manufacturers use an array of equipment in their facility, from numerous manufacturers, models and production years. Some of this older machinery won't be equipped with sensors at all, while some newer devices may have proprietary sensors built in that can be difficult to swap.
 
In these instances, it is important to make certain that the S2aaS provider can either collect data from your existing sensors to compile a complete report, or is advanced enough to warrant replacing the existing sensing technology. For example, some standard sensors are limited to indicating the presence or absence of an object, whereas smart sensors can provide up to 32 bytes of cyclical data, which could provide much more value.
 
Ensuring Sensors are Smart
 
Conveyor systems are frequently fitted with sensors to ensure they only operate when they are carrying products, thus saving energy. A standard sensor can only detect whether a product is present, whereas a smart sensor uses a combination of motion, proximity, weight and image sensing to detect how many products are on the conveyor and how effectively the conveyor is operating.
 
Beyond this, smart sensors can also be used to monitor the health of equipment in a facility. Using the conveyor system as an illustration, accelerometer sensing can be used to monitor the vibration of the equipment, indicating when there may be a mechanical problem or an impending breakdown. With this in mind, switching existing sensors for advanced versions could be extremely worthwhile.
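As a minimal sketch of the accelerometer idea above, the following computes the RMS of vibration samples and raises an alert when it exceeds a threshold. The sample values and the 0.5 g threshold are illustrative assumptions, not values from any real sensor:

```python
import math

def vibration_rms(samples_g):
    """Root-mean-square of accelerometer samples (in g)."""
    return math.sqrt(sum(s * s for s in samples_g) / len(samples_g))

# Hypothetical sample windows from a conveyor drive bearing.
healthy = [0.02, -0.03, 0.01, -0.02, 0.03]
worn_bearing = [0.4, -0.7, 0.6, -0.5, 0.8]

THRESHOLD_G = 0.5  # assumed alert level
for name, samples in [("healthy", healthy), ("worn_bearing", worn_bearing)]:
    rms = vibration_rms(samples)
    print(f"{name}: rms={rms:.3f} g, alert={rms > THRESHOLD_G}")
```

In a real deployment the threshold would be tuned per machine, and trending the RMS over time matters more than any single reading.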
 
Before switching, however, manufacturers should ensure that the sensors offered by the S2aaS provider are advanced enough to provide an improvement. Sensors enabled with IO-Link technology, for example, can communicate much more data. IO-Link is an open standard protocol that standardizes the communication of a sensor’s parameters and features.
 
Ensuring that sensors are enabled with IO-Link technology means that other IO-Link devices can connect to them, allowing manufacturers to gather a far more detailed wealth of data than standard sensors could deliver.
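To illustrate the difference in information content, here is a hypothetical sketch (not a real IO-Link driver): a standard presence sensor yields a single bit, while a smart sensor might deliver a 32-byte cyclic process-data frame carrying several measurements. The frame layout below is invented purely for illustration:

```python
import struct

# Standard sensor: one boolean.
standard_reading = True  # object present

# Smart sensor: pack several measurements into a 32-byte frame.
# Little-endian: uint32 count, three float32 values, uint16 status, pad to 32 bytes.
FRAME_FMT = "<IfffH14x"
frame = struct.pack(
    FRAME_FMT,
    17,            # items on conveyor
    0.42,          # belt speed, m/s
    23.5,          # temperature, deg C
    0.031,         # vibration RMS, g
    0b0000_0001,   # status flags (bit 0 = object present)
)

count, speed, temp, vib, status = struct.unpack(FRAME_FMT, frame)
print(count, round(speed, 2), round(temp, 1), round(vib, 3), bool(status & 1))
```

Even this toy frame shows why 32 bytes of cyclic data can feed analytics that a single presence bit never could.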
 
Assessing Sensor Requirements
 
Before embarking on a subscription-based model for smart sensors, manufacturers should first decide whether they actually need to collect more detailed data from their facility, and whether the investment will benefit their business.
 
Another consideration is whether a subscription-based model will provide better value for money than a normal, purchase-based model. For instance, some sensors are prone to frequent damage because of the application in which they operate. Food and beverage manufacturing is a good example of this.
 
Because of regular washdowns of equipment and varying temperatures during the manufacturing process, sensors used in food manufacturing often require repair or replacement. Extreme temperatures in particular can put tremendous strain on sensors, limiting their ability to collect and report data.
 
In these circumstances, opting for S2aaS can ensure that sensors will be repaired and replaced as soon as necessary, without the unanticipated costs associated with buying a brand-new sensor.
 
Like all servitization business models, S2aaS could provide many benefits to the customer. With a subscription-based service, manufacturers can be confident their sensors will be monitored, repaired and upgraded whenever required. However, it is crucial to remember that not all servitization models can guarantee the same level of service.
 
Before jumping on the S2aaS bandwagon, it is a must that manufacturers do their own research to ensure their service provider can provide real added-value.
 
This article is originally posted on Tronserve.com

Ryder Opens New Maintenance Facility

May 29, 2019
Ryder System, Inc., a commercial fleet management, dedicated transportation, and supply chain solutions provider, recently announced the opening of a new state-of-the-art full-service maintenance facility in Norton, MA. The grand opening event, held in late April, featured a ribbon-cutting ceremony and tours of the facility.
 
The facility opening in Norton reflects the strong growth Ryder is experiencing in the area, as the company recently outgrew its facility in West Bridgewater, MA. The West Bridgewater location will continue to operate, and the two will work in tandem to serve the significant customer base in the region.
 
“This new facility in Norton will help us better serve our expanding customer base in Massachusetts,” says Dennis Cooke, Ryder President of Fleet Management Solutions. “Its convenient location allows us to provide a wide range of transportation services and gives us room to expand as we further develop our business in the region.”
 
The new Ryder facility, positioned at 60 Commerce Way, Norton, MA 02766, offers customers convenient access to the major highways of Interstate 495, Route 24, and Route 95, servicing the greater Boston area, as well as Cape Cod and the MetroWest region.
 
The greater Boston area is a top-25 growth market in the United States, and Norton, MA, is an optimal location for serving existing and new customers in this market, particularly companies from the seafood and beverage industries.
 
Features of the New Norton Facility
 
The facility is built on 19.4 acres of land and features a 24,000 square-foot building with 13 maintenance work bays outfitted to provide maintenance for Ryder ChoiceLease and rental customers. In addition, a two-lane diesel fuel service island is available for Ryder customers.
 
The Norton facility also includes a full-service rental counter for businesses in search of a commercial vehicle, and a 7,300 square-foot office that houses a rental reception and lease sales office, customer service area, and a drivers’ lounge. The site also accommodates a 2,160 square-foot drive-thru wash bay with a fully automated truck wash system, a lube equipment room, and a battery charging room with storage space.
 
Ryder is an $8.4 billion commercial fleet management, specialized transportation, and supply chain solutions company, with operations in the U.S., Canada, Mexico, and the U.K. The Company, founded in 1933, runs behind the scenes, operating critical transportation and logistics functions for more than 50,000 customers, many of which make the products that consumers use every day. Ryder employs 39,600 people, manages a fleet of more than 270,000 commercial vehicles, and operates more than 55 million square feet of warehouse space.
 
This article is originally posted on Tronserve.com

IoT Conference Traces Changing Industry

May 29, 2019
At IoT World, held in Santa Clara, California, from May 13-16, 2019, manufacturers joined companies from a broad range of industries to network and explore how the Internet of Things is transforming the way people do business. Zach Butler, portfolio director for IoT World at Informa Tech, sat down with Manufacturing.net to deliver an inside look at the show.
 
Started in 2014 by Informa, IoT World has expanded dramatically along with its subject matter. In 2019 about 12,500 people attended, up from 700 the first year. Butler mentioned that while the show expanded by leaps and bounds from 2014 to 2016, it has seen sustained but more gradual growth after the initial spike in interest. This matches the rate of adoption among companies; the show was primed to catch the rising adoption of IoT. Now, nine distinct tracks spanning IoT allow companies that use multiple applications to network and learn about particular subjects.
 
This year, he said, the show floor featured more physical/industrial applications and vehicles. From automotive, IoT is trickling into other sectors.
 
“Automotive is probably the peak of application because automotive covers everything,” Butler said. “Auto manufacturing is one of the most precise industries out there, and car makers need to make vehicles that can save lives. So they use lots of different technology in there.”
 
Butler noted that he thinks of IoT as a movement, not a technology. Of course, the term covers an array of devices and systems. Some see artificial intelligence as a component of IoT, while Butler describes it as an analytic tool sitting on top of some of the IoT capabilities in wide use now.
 
As for the path to the future, Butler said that he anticipates more companies moving to edge computing and on-premise compute, with many choosing edge computing over the cloud.
 
“We’re still in the cloud revolution,” Butler said. “Companies are going to have to alter and reinvent, or miss out on capturing potential new areas of business.” For example, he said, these areas include data services: information as a service. “[Manufacturers can] own more of the life cycle of the customer with maintenance and training.”
 
For companies that are already involved in As-A-Service support but want to step further into analysis and other applications of IoT, Butler said, look at your goal first, not what technology might be presented to you. “Start with the outcome. Are you looking to improve the quality of your product? Or are you planning to find new business models? The minute companies know that, they need to know how to change … and IoT provides data to create those outcomes.”
 
This article is originally posted on Tronserve.com

MVTec and Hilscher join forces

May 29, 2019
Munich, May 28, 2019 - MVTec Software GmbH (www.mvtec.com), the trusted provider of modern machine vision software, and Hilscher Gesellschaft für Systemautomation mbH (www.hilscher.com), a market leader in PC cards for industrial communication, have begun a technical partnership to enable easier integration of machine vision and process automation. Combining MVTec software products with Hilscher PC cards enables powerful machine vision applications to be integrated easily and seamlessly into any process control system. For example, MVTec's machine vision software MERLIC can communicate universally with all commercial programmable logic controllers (PLCs). The partnership benefits customers by bundling two leading, compatible technologies.
 
Optimized process integration into MVTec MERLIC
On the MVTec side, the optimized process integration is largely based on the application programming interface (API) of the cifX PC card family from Hilscher. This API is a standard interface for all PC cards supplied by the manufacturer in all common form factors. Users of MVTec MERLIC and MVTec HALCON can select from among all popular Fieldbus and Real-Time Ethernet industrial protocols, including PROFINET, EtherCAT, and many other standards.
 
Thanks to Hilscher's consistent platform strategy, all cifX PC cards use the standardized API as well as the same drivers and tools, regardless of the protocol or card format. The integration of the multiprotocol netX processor enables all Real-Time Ethernet protocols to be realized on a single piece of hardware. To switch from one protocol to another, only the firmware has to be reloaded.
 
'The combination of MVTec and Hilscher products proves how easy it is to join the two 'worlds' of machine vision and process automation,' says Christoph Zierl, Technical Director at MVTec Software GmbH, about the collaboration. 'We look forward to providing our customers this added value and hope to see many more machine vision solutions based on the exciting products from Hilscher and MVTec in the near future.'
 
'With our cifX PC card technology, we are delighted to offer HALCON and MERLIC users an interface between their automation network and machine vision software,' adds Tim Pauls, Product Manager at Hilscher Gesellschaft für Systemautomation mbH. 'The combination of MVTec and Hilscher technology provides users with a unique range of drivers, form factors, and network protocols, along with powerful machine vision software.'



This article is originally posted on Tronserve.com

SAKOR TECHNOLOGIES AND SAJ TEST PLANT PVT LTD JOIN FORCES

May 29, 2019
SAKOR Technologies Inc., a recognized leader in the area of high-performance dynamometer systems, has expanded its association with SAJ Test Plant Private Ltd (Pune, India), which serves as the exclusive representative in India of SAKOR products, including the AccuDyne™ AC Dynamometer System, the DynoLAB™ Test Cell Control System, and other products for hybrid and electric vehicle testing and high-voltage battery testing and simulation. Under the enhanced relationship, SAJ will increase its efforts in representing SAKOR products in the hybrid and electric vehicle market, as well as products centered on engine and driveline transient and other advanced testing.
 
SAKOR and SAJ recently presented information on SAKOR's projects and hardware at the Symposium on International Automotive Technology 2019 (SIAT 2019), held in Pune, India. Over the following week, the SAKOR/SAJ team met with a variety of potential customers at their facilities to discuss specific needs.
The preliminary target market is hybrid and electric vehicle development. The companies are working together to supply the test equipment required to meet a current Indian government mandate calling for electrifying a large majority of the vehicle fleet by 2030 to reduce pollution from gasoline and diesel engines. With the fast response offered by the DynoLAB system, hybrid and electric development teams can test with a drive cycle similar to what the vehicle sees, including quick transients for torque and speed.
Under this enhanced agreement, SAKOR and SAJ will also partner to offer more types of sophisticated testing systems, including advanced diesel transient testing.
Highlighting the significance of this association, Mr. Prakash Jagtap, Chairman and MD of SAJ, said: 'We are happy to offer high-quality, sophisticated equipment to companies that have been spurred into action by the challenging requirement to transition from existing conventional vehicles to hybrid or electric vehicles by 2030. This agreement puts both companies in a strong position to address these new opportunities.'



This article is originally posted on Tronserve.com

Additive Manufacturing as a Production Technology

May 29, 2019
(Cranfield, United Kingdom, 23rd May 2019). Uptake of AM processes for production applications is still quite low. Reasons for this include high capital and running costs; consumable costs (particularly for refined metal powders); inconsistent material properties, a significant prohibitor for critical components; extensive pre- and post-processing requirements (and costs); and, often, a failure to understand when and how to apply AM to maximize its benefits.
 
It is within this framework that the European Society for Precision Engineering and Nanotechnology (euspen) will host a Special Interest Group (SIG) meeting on 16-18 September 2019 at the Ecole Centrale de Nantes, France, the sixth time that euspen has joined forces with the American Society for Precision Engineering (ASPE) to focus on important issues surrounding precision in AM.
 
Many of the existing commercial companies that provide AM platforms today are aware of the barriers to adoption, with many — mainly at the high-end of the system spectrum — refining and developing their respective processes by adding value propositions pre-, in-, and post-process, particularly for production applications.
 
The corresponding vernacular that has surfaced and is growing in use across the AM sector is 'end-to-end manufacturing solutions.' As this phrase implies, the emphasis is shifting away from the additive process itself, towards a comprehensive solution for production manufacturing that promises to address the barriers, and offer a compelling production proposition that incorporates the unique benefits of AM technologies, eliminates the identified and/or real challenges, and optimizes the performance and efficiency of the process.
 
This includes — in many cases — superior control software and man/machine interfaces; quality control via digital simulation in-process; validation; improved, qualified materials; and extended automation pre- and post build.
 
A key overarching potential for manufacturing applications with AM within an end-to-end solution is visibility across the complete process and the ability to detect risks and defects, which in turn offers traceability for quality control with particular reference to surface structure, dimensional accuracy, and part strength. This is true for one-off components, but even more so for series production where repeatability — in terms of consistency of materials and mechanical properties of the parts — is vital.
 
Automation of the overall process is also an essential factor, especially in a series manufacturing environment. Materials handling pre- and post-build is a particularly laborious task that calls for keen attention to safety measures. Handling the metal powders that are fed into the machine, and recycling the unused powder afterwards, is probably the dirtiest secret of all when it comes to AM.
 
The euspen / ASPE SIG meeting will focus on such issues that are critical to the viability of AM as a production technology. The local hosts and organising committee for the SIG are Prof. Alain Bernard from Ecole Centrale de Nantes; Dr David Bue Pedersen from the Technical University of Denmark; Prof. Richard Leach from the University of Nottingham; and Dr John Taylor from the University of North Carolina at Charlotte. The AM SIG meeting is chaired by Prof. Richard Leach and Dr John Taylor.



This article is originally posted on Tronserve.com

Human Reflexes Help MIT's HERMES Rescue Robot Keep Its Footing

May 29, 2019
A sudden, tragic wake-up call: That’s how many roboticists view the Fukushima Daiichi nuclear accident, caused by the massive earthquake and tsunami that struck Japan in 2011. Reviews following the accident outlined how high levels of radiation foiled workers’ attempts to carry out urgent measures, such as operating pressure valves. It was the perfect mission for a robot, but none in Japan or elsewhere had the capabilities to pull it off. Fukushima forced many of us in the robotics community to realize that we needed to get our technology out of the lab and into the world.
 
Disaster-response robots have made significant progress since Fukushima. Research groups around the world have demonstrated unmanned ground vehicles that can move over rubble, robotic snakes that can squeeze through narrow gaps, and drones that can map a site from above. Researchers are also building humanoid robots that can survey the damage and perform critical tasks such as accessing instrumentation panels or transporting first-aid equipment.
 
But despite the advances, building robots with the same motor and decision-making skills as emergency workers remains a challenge. Pushing open a heavy door, discharging a fire extinguisher, and other quick but demanding tasks require a level of coordination that robots have yet to master.
 
One way of compensating for this limitation is to use tele-operation — having a human operator remotely control the robot, either continuously or during specific tasks, to help it accomplish more than it could on its own.
 
Tele-operated robots have long been applied in industrial, aerospace, and underwater settings. More recently, researchers have experimented with motion-capture systems to transfer a person’s movements to a humanoid robot in real time: You wave your arms and the robot mimics your gestures. For a completely immersive experience, special goggles can let the operator see what the robot sees through its cameras, and a haptic vest and gloves can provide tactile sensations to the operator’s body.
 
At MIT’s Biomimetic Robotics Lab, our group is pushing the melding of human and machine even further, in hopes of accelerating the development of practical disaster robots. With support from the Defense Advanced Research Projects Agency (DARPA), we are developing a telerobotic system that has two parts: a humanoid capable of nimble, dynamic behaviors, and a new kind of two-way human-machine interface that sends your motions to the robot and the robot’s motions to you. So if the robot steps on debris and starts to lose its balance, the operator feels the same instability and instinctively reacts to avoid falling. We then capture that physical response and send it back to the robot, which helps it avoid falling, too. Through this human-robot link, the robot can harness the operator’s innate motor skills and split-second reflexes to keep its footing.
 
You could say we’re putting a real human brain inside the machine.
 
Future disaster robots will ideally have a great deal of autonomy. Someday, we want to be able to send a robot into a burning building to search for victims all on its own, or place a robot at a damaged industrial facility and have it figure out which valve it needs to shut off. We’re nowhere near that level of capability. Hence the growing interest in teleoperation.
 
The DARPA Robotics Challenge in the United States and Japan’s ImPACT Tough Robotics Challenge are among the recent efforts that have demonstrated the possibilities of teleoperation. One reason to have humans in the loop is the unknown nature of a disaster scene. Navigating these chaotic environments demands a high degree of adaptability that current artificial-intelligence algorithms can’t yet achieve.
 
For example, if an autonomous robot encounters a door handle but can’t find a match in its database of door handles, the mission fails. If the robot gets its arm stuck and doesn’t know how to free itself, the mission fails. Humans, on the other hand, can readily deal with such situations: We adapt and learn on the fly, and we do it every day. We can recognize variations in the shapes of objects, cope with poor visibility, and even figure out how to use a new tool on the spot.
 
The same goes for our motor skills. Consider running with a heavy backpack. You may run slower or not as far as you would without the extra weight, but you can still carry out the task. Our bodies can adapt to new dynamics with surprising ease.
 
The tele-operation system we are creating is not designed to replace the autonomous controllers that legged robots use to self-balance and perform other tasks. We’re still equipping our robots with as much autonomy as we can. But by coupling the robot to a human, we take advantage of the best of both worlds: robot endurance and strength in addition to human versatility and perception.
 
Our lab has long explored how biological systems can inspire the design of better machines. A particular limitation of existing robots is their inability to perform what we call power manipulation—strenuous feats like knocking a chunk of concrete out of the way or swinging an axe into a door. Most robots are designed for more delicate and precise motions and gentle contact.
 
We designed our humanoid robot, called HERMES (for Highly Efficient Robotic Mechanisms and Electromechanical System), specifically for this type of heavy manipulation. The robot is relatively light—weighing in at 45 kilograms—and yet strong and sturdy. Its body is about 90 percent of the size of an average human, which is big enough to allow it to naturally maneuver in human environments.
 
Instead of using regular DC motors, we made custom actuators to power HERMES’s joints, drawing on years of experience with our Cheetah platform, a quadruped robot capable of explosive motions such as sprinting and jumping. The actuators pair brushless DC motors with a planetary gearbox—so called because its three “planet” gears revolve around a “sun” gear—and they can generate a large amount of torque for their weight. The robot’s shoulders and hips are actuated directly, while its knees and elbows are driven by metal bars connected to the actuators. This makes HERMES less rigid than other humanoids, able to absorb mechanical shocks without its gears shattering to pieces.
 
The first time we turned HERMES on, it was still just a pair of legs. The robot couldn’t even stand on its own, so we suspended it from a harness. As a simple test, we set its left leg to kick. We grabbed the first thing we found lying around the lab—a plastic trash can—and placed it in front of the robot. It was satisfying to see HERMES kick the trash can across the room.
 
The human-machine interface we built for controlling HERMES is different from conventional ones in that it relies on the operator’s reflexes to improve the robot’s stability. We call it the balance-feedback interface, or BFI.
 
The BFI took months and multiple iterations to develop. The initial concept had some resemblance to the full-body virtual-reality suits featured in the 2018 Steven Spielberg movie Ready Player One. That design never left the drawing board. We realized that physically tracking and moving a person’s body—with more than 200 bones and 600 muscles—isn’t a straightforward task, so we decided to start with a simpler system.
 
To work with HERMES, the operator stands on a square platform, about 90 centimeters on a side. Load cells measure the forces on the platform’s surface, so we know where the operator’s feet are pushing down. A set of linkages attaches to the operator’s limbs and waist (the human body’s center of mass, basically) and uses rotary encoders to accurately measure displacements to within less than a centimeter. But some of the linkages aren’t just for sensing: They also have motors in them, to apply forces and torques to the operator’s torso. If you strap yourself to the BFI, those linkages can apply up to 80 newtons of force to your body, which is sufficient to give you a good shove.
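As a rough illustration of how load cells can locate where the operator's feet press down, here is a hypothetical sketch assuming four corner load cells on the ~90 cm square platform (the corner layout and all numbers are assumptions; the article does not specify the actual sensor arrangement):

```python
# Hypothetical: four vertical-force load cells at the corners of a square
# platform; the weighted average of their readings gives the center of
# pressure, i.e., where the operator's feet are pushing down.

def center_of_pressure(f_fl, f_fr, f_rl, f_rr, side=0.9):
    """Return (x, y) of the center of pressure, origin at the platform center.

    f_* are vertical forces in newtons at the front-left, front-right,
    rear-left, and rear-right corners; side is the edge length in meters.
    """
    total = f_fl + f_fr + f_rl + f_rr
    if total <= 0:
        raise ValueError("no load on platform")
    half = side / 2.0
    x = half * ((f_fr + f_rr) - (f_fl + f_rl)) / total  # + toward right edge
    y = half * ((f_fl + f_fr) - (f_rl + f_rr)) / total  # + toward front edge
    return x, y
```

With equal forces at all four corners the center of pressure is the platform center; leaning forward shifts the estimate toward the front edge.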
 
We set up two separate computers for controlling HERMES and the BFI. Each computer runs its own control loop, but the two sides regularly exchange data. At the start of each loop, HERMES gathers data about its posture and compares it with the data received from the BFI about the operator’s posture. Based on the difference, the robot adjusts its actuators and then immediately sends its new posture data to the BFI. The BFI then runs a similar control loop to adjust the operator’s posture. This process repeats 1,000 times per second.
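The coupled loops can be sketched in one dimension. This is a minimal, assumed model (a single scalar stands in for "posture" and the gain is made up; the real controllers are far more involved), showing how each side nudges its state toward the other's 1,000 times per second:

```python
# Minimal sketch of the bilateral coupling: the robot corrects toward the
# operator's posture while the BFI motors push the operator toward the
# robot's posture. GAIN is an assumed coupling gain, not from the article.

GAIN = 0.2

def coupled_step(robot, operator, gain=GAIN):
    """One 1 ms iteration of the exchange between the two control loops."""
    error = operator - robot
    robot += gain * error      # robot adjusts its actuators toward the operator
    operator -= gain * error   # BFI nudges the operator toward the robot
    return robot, operator

# Starting from different postures, the two sides converge on a shared one.
r, o = 0.0, 1.0
for _ in range(100):           # 100 ms of simulated coupling at 1 kHz
    r, o = coupled_step(r, o)
```

Because each step shrinks the posture difference by a constant factor, the two states converge quickly; in practice the gains must be tuned to avoid the vibration and instability the article mentions.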
 
To let the two sides operate at such fast rates, we had to condense the information they share. For example, rather than sending a detailed representation of the operator’s posture, the BFI sends only the position of the person’s center of mass and the relative position of each hand and foot. The robot’s computer then scales these measurements to the dimensions of HERMES, which reproduces the reference posture. As in any other two-way teleoperation loop, this coupling can cause vibration or instability. We minimized that by fine-tuning the scaling that maps the postures of the human and the robot.
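The proportional scaling step might look like the following sketch. The single height-ratio mapping, the operator height, and the point names are all assumptions for illustration; only the ~90% size ratio comes from the article:

```python
# Assumed: condense the operator's posture to named points relative to the
# center of mass, then rescale every coordinate by the human-to-robot size
# ratio (HERMES is about 90 percent of average human size).

OPERATOR_HEIGHT = 1.80                  # meters (assumed)
ROBOT_HEIGHT = 0.9 * OPERATOR_HEIGHT    # ~90% of human size, per the article

def scale_posture(operator_points):
    """Map posture points (name -> (x, y, z) in meters, relative to the
    center of mass) onto the robot's proportions."""
    ratio = ROBOT_HEIGHT / OPERATOR_HEIGHT
    return {name: tuple(ratio * c for c in point)
            for name, point in operator_points.items()}

posture = {"left_hand": (0.3, 0.5, 0.2), "right_foot": (0.1, -0.9, 0.0)}
robot_posture = scale_posture(posture)  # every coordinate scaled by 0.9
```

Sending only a handful of scaled points, rather than a full skeleton, is what keeps each 1 ms exchange small enough to sustain the 1 kHz rate.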
 
To test the BFI, one of us (Ramos) volunteered to be the user. After all, if you’ve created the core parts of the system, you’re probably best equipped to debug it.
 
In one of the first tests, we ran an early balancing algorithm for HERMES to see how human and robot would behave when coupled together. One of the researchers used a rubber mallet to hit HERMES on its upper body. With every blow, the BFI exerted a similar jolt on Ramos, who reflexively shifted his body to regain balance, causing the robot to also catch itself.
 
Up to this point, HERMES was still just a pair of legs and a torso, but we eventually completed the rest of its body. We built arms that use the same actuators as the legs, and hands made of 3D-printed parts reinforced with carbon fiber. The head features a stereo camera for streaming video to a headset worn by the operator. We also added a hard hat, just because.
 
In another round of experiments, we had HERMES punch through drywall, swing an axe against a board, and, with oversight from the local fire department, put out a controlled blaze using a fire extinguisher. Disaster robots will need more than just brute force, though, so HERMES and Ramos also performed tasks that demand more dexterity, like pouring water from a jug into a cup.
 
In each case, as the operator simulated performing the task while strapped to the BFI, we observed how well the robot mirrored those actions. We additionally looked at the scenarios in which the operator’s reactions could help the robot the most. When HERMES punched the drywall, for instance, its torso rebounded backward. Almost immediately, a corresponding force pushed the operator, who reflexively leaned forward, helping HERMES to adjust its posture.
 
We were set for more tests, but we knew that HERMES was too big and powerful for many of the experiments we wanted to do. Although a human-scale machine lets you carry out realistic tasks, it is also time-consuming to move, and it involves lots of safety precautions — it’s wielding an axe! Attempting more dynamic behaviors, or even walking, proved difficult. We decided HERMES needed a little sibling.
 
Little HERMES is a scaled-down version of HERMES. Like its big brother, it uses custom high-torque actuators, which are mounted close to the body rather than on the legs. This arrangement allows the legs to swing much faster. For a more compact design, we cut the number of axes of motion—or degrees of freedom, in robotics parlance—from six to three per limb, and we replaced the original two-toed feet with simple rubber spheres, each with a three-axis force sensor tucked inside.
 
Connecting the BFI to Little HERMES required changes. There’s a big difference in scale between a human adult and this smaller robot, and when we tried to link their movements directly—mapping the position of the human’s knees to the robot’s knees, and so forth—the result was jerky motion. We needed a different mathematical model to mediate between the two systems. The model we came up with tracks parameters such as ground contact forces and the operator’s center of mass. It captures a sort of “outline” of the operator’s intended motion, which Little HERMES can then execute.
 
In one experiment, we had the operator step in place, slowly at first and then faster. We were happy to see Little HERMES marching in just the same way. When the operator hopped, Little HERMES jumped too.
 
In a sequence of photos we took, you can see both human and robot in midair for a brief instant. We also placed pieces of wood underneath the robot’s feet as obstacles, and the robot’s controller was able to keep the robot from sliding.
 
Much of this was still preliminary work, and Little HERMES wasn’t freely standing or able to walk around. A supporting pole attached to its back prevented it from tipping forward. At some point, we’d like to develop the robot further and set it loose to amble around the lab and perhaps even outdoors, as we’ve done with Cheetah and Mini Cheetah (yes, it too has a little sibling).
 
Our next steps involve addressing a host of challenges. One of them is the mental fatigue an operator experiences after using the BFI for extended periods or for tasks that call for a lot of concentration. Our experiments suggest that when you have to command not only your own body but also a machine’s, your brain tires quickly. The effect is especially pronounced for fine-manipulation tasks, such as pouring water into a cup. After repeating that experiment three times in a row, the operator had to take a break.
 



This article is originally posted on Tronserve.com

Leveraging Automation in Warehousing, Part 2

May 28, 2019
MNET:
What do warehouse managers need to know about potential pain points when it comes to safety and automation?
 
Adam Kline:
When you are talking about traditional conveyance and automation, those things don’t move. There are safety mechanisms if you need to get from one side of the conveyor to the other, and as soon as you lift a section, it shuts the whole conveyor down until you put the bridge back down. There are a lot of safety measures in place for that static automation. But when you think about automated forklifts, AGVs, or collaborative robots, now you're utilizing a potentially large number of these robots. It’s usually twenty or thirty or more, and those will be communicating with the people. If you look at the evolution of these bots, the initial bots had to be in their own area, and you didn’t see humans and bots interacting. The new ones have an array [of safety sensors].
 
While we were having a stand-up meeting there [at Locus], we were in their test warehouse where these things are out and about. A bot came up to us and wanted to get to where we were standing. It would stop, pause, and know it couldn’t get through. It was almost as if it was saying excuse me. Eventually it left. Then it came back and patiently waited, stood there a little closer and a little longer. Then it went away and came back. It stood there again, and in the end we moved, and it was almost as if it said thank you and patiently went about what it wanted to do.
 
Compare that with those old Kiva bots: Amazon has retrofitted some of its people with vests with RFID in them, and retrofitted the bots so they can interact a bit more. They have retrofitted some obstacle avoidance. I don’t know precisely how well that’s going to work, but maybe it’ll help.
 
The other component is congestion. It must be thought about. Some of our new capabilities include our take on waveless picking. It’s even more than that. It’s that orchestration of work we spoke of before: an intelligent, cognitive work-release engine. It helps balance the work across multiple resources so you don’t end up with high congestion on one side and under-utilized resources on the others. It’s helping divvy up that work, so to speak, as capacity frees up. It’s a systematic way to achieve some of the points we talked about in terms of safety. It helps keep people and robots from bumping into each other.
 
What are some signs that warehouse or supply chain managers should work on adding more automation?
 
When you were speaking about workforce and people becoming hard to find, I think that’s certainly one. We need to start taking into account creative ways to either improve the efficiency of the good workers we have or augment them with automation. The other is rising volumes. We see customers all the time with one of two things happening: either their business is just growing, so their existing channels are growing and they need to get more work out of a given facility in the same time, and automation can help with that and give you more virtual capacity. The second, which is more interesting and a recent trend, is, say I’m a brand supplier [such as] footwear or apparel guys that provide brand shipping to retailers, or I may just be a manufacturer providing goods to a wholesaler or direct. In both of those we’ve seen the same effect with ecommerce and direct-to-consumer: quite often those suppliers are being asked to build out dropship programs on behalf of their retail customers. That requires an entirely separate way of driving fulfillment. Where you were once shipping larger, bulk case quantities and often full pallets, now you’re talking about single-item orders, maybe one or two units. That’s a much different operational proposition. Often when you bring in a new discipline like that, automation can help.
 
Or signs that their technology is getting in their way and should be streamlined?
 
We have been considering this a fair amount lately, and it’s interesting because we think of automation as a means to help things move more effectively, [and to enable] hands-off decision making, like an ASRS [automated storage and retrieval system], for example. It’s making decisions about where to store goods, how to pull them out, etc. Some of our customers find that they outgrow their fixed assets. How do I cope with that? Some of our customers are looking at one of two solutions: either a different technology, a different piece of automation, or augmenting it with something else on the side. Typically some of that technology has a pretty hefty price tag. So now that you’ve outgrown it, do you bring in another DC or bring in robotic tech to improve the areas around it? There's not a great way to scale that fixed asset up. Be sure you're planning well ahead, not for the volumes you have now but for the volumes ten years out. That’s one example.
 
The other example is somebody looking at something because it’s cool, it’s neat; that’s not the reason to procure technology. Sometimes somebody will put in something that their operations just aren't ready for, or the technology just isn't there. You end up with a Rube Goldberg machine.
 
This article is originally posted on TRONSERVE.COM

Leveraging Automation in Warehousing, Part 1

May 28, 2019
The warehouse automation industry is racing to use technology to keep up with the likes of Amazon. What do warehouse managers or supply chain managers need to know about the various automation options available? How do you know when your efficiency can be improved by adding more machinery, or when automation won’t actually help your process? Manufacturing.net sat down with Adam Kline, product director for warehouse management and supply chain intelligence at Manhattan Associates, for a two-part conversation to try to answer those questions.
 
This conversation has been edited for length and clarity.
 
MNET:
What kind of robotic assists or automated machines are being rolled out in the supply chain?
 
Adam Kline:
Broadly, there are two different kinds with a ton of different subtypes. Traditional automation includes conveyance, print-and-applies, perhaps unit sorters and put walls and stuff that’s bolted to the ground. You can also put in ASRS [automated storage and retrieval] and shuttle systems, which are more modern and sophisticated. Opposed to this is where you start to see things that are more dynamic in nature: robotic solutions, collaborative picking bots and those sorts of things.
 
With respect to robotic assists, there are a number of different categories of those as well. We have come in touch with quite a few vendors. I will identify two of our partners to start. One is Locus Robotics, which provides collaborative picking bots. We are official go-to-market partners with them and have quite a few joint installations. The interesting thing about their solution is it's not there to replace people. That is one of the misconceptions around robotics in the warehouse. Although some solutions are intended to reduce headcount, I think the majority are there to make your people in the DC more efficient. With a traditional goods-to-person system like the old-style Kiva [Systems] bots, prior to Amazon buying them, the intent was that the picking would kind of go away.
 
Those bots would bring goods to a packing place where people would be at a picking station. With the new bots, the pickers are still out and about in the shop section and the bots come and sort of visit! I say that because there's a bit of personality with these bots, which is interesting. The picker will pick into their totes, hit a button and move away. And the bots will go to another area and visit another picker. You have two different resources, the bots and the pickers, both of whom are moving. The pickers are confined to a specific area and the bots are free to move around. Which is a great example of collaborative picking. The interesting thing about it is you are reducing the amount of travel for each picker but not reducing it to zero. The physical movement of products from rack to tote is still in the picker’s hands. The bots are basically a highly flexible conveyance. Locus claims a pretty high improvement in total efficiency for each picker. We need to see more results before quoting their stats, but it looks to be more efficient.
 
Second is Kindred AI, a picker. Generally you either pick supply to a put wall or a tote. This bot is a substitute for a put wall. It's different from what you see in other warehouses because everything is generally straight and this is actually round. When you first see it, it’s almost a bit jarring. When you get closer you see the tote is dumped into a hopper inside this sort-bot, and a robotic arm picks up each individual item. As it picks an item up, an array of scanners ensures it gets a good read on the barcode. As soon as it gets the barcode, it sorts the item into a bin for an order. It can continue to do this as items and orders come in. As soon as an order is finished, someone can pack it from the other side. You've automated the sorting process. These two solutions could possibly work together, although I do not know if anyone has done it that way.
 
Some administrators I've spoken to have told me that automation is absolutely not a replacement for labor, but a reaction to a dwindling labor pool. Have you seen this across industries?
 
We certainly have seen it. Our warehouse management solution has a number of attributes worth bringing up. We have a labor management system as well, which is complementary. Over the last two to three years we have brought our warehouse management and labor management capabilities very close together. You’d be hard pressed to know where one ends and the other starts. All the reasons you just stated have driven us to that decision. I don’t have percentages at my fingertips, but if you look at the number of qualified applications for a given position, directionally they have dropped substantially. One of my peers who manages our labor management solution shared a stat that a growing number of companies, 25 to 30 percent, are actually hiring warehouse workers who have a criminal record. The quantity of qualified applicants has gone down, so the quality of applicants has gone down. A lot of our clients are searching for alternatives, looking for ways to make their good workers even more efficient.
 
As with the sort bots, these are kind of a labor replacement in that specific area, in contrast to pick bots that work with people. This doesn't necessarily come with a headcount reduction, but those workers might be moved around.
 
This article is originally posted on TRONSERVE.COM

Germany's Bosch Fined $100 Million Over Diesel Scandal

May 28, 2019
German prosecutors have fined auto parts and technology company Bosch 90 million euros ($100 million) over its role in the diesel emissions scandal that erupted at Volkswagen in 2015.
 
Prosecutors in Stuttgart said Thursday that the company, formally called Robert Bosch GmbH, was penalized for a negligent violation of supervisory obligations, and that the company had decided not to appeal.
 
Bosch shipped millions of engine control systems that were installed on various manufacturers' cars beginning in 2008 and whose software, in prosecutors' words, 'contained in part prohibited strategies' — leading to cars emitting more nitrogen oxide than permitted by regulators.
 
Nonetheless, prosecutors said they suspect that 'the initiative to integrate and shape the banned strategies came from employees of the auto manufacturers.'
 
They said that the fine does not affect ongoing criminal probes of Bosch employees. The bulk of the fine — 88 million euros — stems from profits on the sales of the parts, with the remaining 2 million euros covering the misdemeanor itself. Prosecutors said that they took account of Bosch managers' full and positive cooperation with investigators since 2015.
 
Bosch agreed to a $327.5 million civil settlement in the United States for supplying emissions software to Volkswagen, Audi and Porsche vehicles that enabled cheating on diesel emissions tests.
 
The diesel emissions scandal has cost Volkswagen itself billions of euros.
 
This article is originally posted on TRONSERVE.COM

China Ramps Up War of Rhetoric in Trade Standoff With U.S

May 28, 2019
Stepping up Beijing's propaganda offensive in the tariff deadlock with Washington, Chinese state media on Friday accused the U.S. of seeking to 'colonize global business' with its attacks on Huawei and other Chinese technology companies.
 
There was no word from either side on progress toward restarting conversations between the world's two largest economies, though President Donald Trump said he expected to meet with his Chinese counterpart, Xi Jinping, next month at a G-20 meeting in Japan.
 
Talks over how to cut the huge, longstanding U.S. trade deficit with China and resolve complaints over Beijing's methods for acquiring advanced foreign technologies foundered earlier this month after Trump raised tariffs on billions of dollars of imports from China.
 
At a regular briefing Friday, foreign ministry spokesman Lu Kang accused American politicians he didn't name of 'fabricating various lies based on subjective presumptions and trying to mislead the American people.'
 
The China Daily, an English-language newspaper, said U.S. expressions of concern about Chinese surveillance equipment maker Hikvision were for the self-serving aim of claiming the 'moral high ground' to advance Washington's political agenda.
 
'In this way, it is hoping to achieve the colonization of the global business world,' the newspaper stated.
 
Hikvision said in a statement Friday that it takes U.S. concerns about its business seriously and is working to make certain it complies with human rights standards.
 
Activists have been urging the U.S. and other countries to sanction China over repression of members of Muslim minority ethnic groups in the northwestern Xinjiang region, where an estimated 1 million people are being held in re-education camps.
 
The New York Times reported the U.S. Commerce Department may put Hikvision on its 'entity list,' restricting its business with U.S. companies over its alleged role in supporting surveillance in Xinjiang. In its statement, the company said it had 'engaged with the U.S. government relating to all of this since last October.'
 
Hikvision said it had retained former U.S. Ambassador-at-large Pierre-Richard Prosper of the firm Arent Fox to advise the company on human rights compliance. 'Over the past year, there have been many reports about ways that video surveillance products have been involved in human rights violations,' the statement said. 'We read every report seriously and are listening to voices from outside the company.'
 
In South Korea, officials said they were discussing security concerns pertaining to the country's 5G, or fifth-generation, cellphone networks with the U.S.
 
Officials in South Korea's Foreign Ministry and presidential office could not, however, confirm the report by the Chosun Ilbo newspaper that U.S. officials want Seoul to block a local wireless carrier that uses Huawei equipment for its 5G services from unspecified 'sensitive areas.'
 
Washington considers Huawei, the world's leading supplier of telecom gear and No. 2 smartphone maker, a security threat. Huawei has sought to ease those concerns and has denied assertions that it would facilitate spying by Beijing.
 
It's not clear whether Seoul would accede to potential U.S. demands to block imports of Huawei products at the risk of triggering retaliation from China, its biggest trading partner.
 
A U.S. business group reported Friday that its members' operations in China are facing rising pressure from trade friction after the Trump administration imposed 25% tariffs on $250 billion in Chinese imports, with plans to extend those duties to another $300 billion, essentially all the goods America buys from China.
 
'The negative impact of tariffs is clear and hurting the competitiveness of American companies in China,' the American Chamber of Commerce in China and AmCham Shanghai said in announcing the results of a study of nearly 250 companies conducted May 16-20.
 
China has increased tariffs on $110 billion of U.S. products and has said it's prepared to do more to guard its national interest. The report said about 40 of the companies interviewed were subject to increased inspections or slower customs clearance. Just over half have yet to experience any impact from such non-tariff retaliatory measures.
 
To cope, companies are focusing more on the China market instead of exporting to the U.S., it said, and delaying or canceling investment decisions.
 
This article is originally posted on Tronserve.com

New Optimization Chip Tackles Machine Learning, 5G Routing

May 28, 2019
Engineers at Georgia Tech say they’ve come up with a programmable prototype chip that efficiently solves a huge class of optimization problems, including those needed for neural network training, 5G network routing, and MRI image reconstruction. The chip’s architecture embodies a particular algorithm that breaks up one huge problem into many small problems, works on the subproblems, and shares the results. It does this over and over until it comes up with the best answer. Compared to a GPU running the algorithm, the prototype chip—called OPTIMO—is 4.77 times as power efficient and 4.18 times as fast.
 
The training of machine learning systems and a broad variety of other data-intensive work can be cast as a type of mathematical problem called constrained optimization. In it, you're trying to minimize the value of a function under some constraints, explains Georgia Tech professor Arijit Raychowdhury. For instance, training a neural net could involve finding the lowest error rate under the constraint of the size of the neural network.
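As a toy illustration of constrained optimization (not taken from the article, and far smaller than the neural-network example), the sketch below minimizes a one-variable function under a simple bound constraint using projected gradient descent; the function, bound, and step size are all invented for illustration:

```python
# Toy constrained optimization: minimize f(x) = (x - 3)^2 subject to x <= 2.
def projected_gradient(grad, project, x0, lr=0.1, steps=200):
    """Gradient descent with a projection step that re-imposes the constraint."""
    x = x0
    for _ in range(steps):
        x = project(x - lr * grad(x))  # step downhill, then clamp to feasible set
    return x

x_star = projected_gradient(grad=lambda x: 2 * (x - 3),     # f'(x)
                            project=lambda x: min(x, 2.0),  # enforce x <= 2
                            x0=0.0)
print(x_star)  # the constrained minimum sits on the boundary, x = 2.0
```

The unconstrained minimum (x = 3) violates the constraint, so the solver settles on the boundary value, exactly the kind of trade-off the neural-net example describes.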
 
“If you can improve [constrained optimization] using smart architecture and energy-efficient design, you will be able to accelerate a big class of signal processing and machine learning problems,” says Raychowdhury. A 1980s-era algorithm called the alternating direction method of multipliers, or ADMM, turned out to be the solution. The algorithm solves gigantic optimization problems by breaking them up and then reaching a solution over several iterations.
 
“If you wish to solve a large problem with a lot of data—say one million data points with one million variables—ADMM allows you to break it up into smaller subproblems,” he says. “You can cut it down into 1,000 variables with 1,000 data points.” Each subproblem is solved, and the results are incorporated in a “consensus” step with the other subproblems to reach an interim solution. With that interim solution incorporated back into the subproblems, the process repeats until the algorithm arrives at the optimal solution.
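The break-up/consensus loop Raychowdhury describes can be sketched in a few lines. This is a hypothetical, scaled-down consensus ADMM on a trivial least-squares problem, where each "data point" gets its own subproblem; it is not OPTIMO's actual implementation:

```python
def consensus_admm(a, rho=1.0, iters=100):
    """Toy consensus ADMM: minimize sum_i (x - a_i)^2 by giving each data
    point its own subproblem, then reconciling them in a consensus step."""
    n = len(a)
    x = [0.0] * n   # local solutions, one per subproblem
    u = [0.0] * n   # scaled dual variables (track disagreement with consensus)
    z = 0.0         # shared consensus variable
    for _ in range(iters):
        # each "unit" minimizes (x_i - a_i)^2 + (rho/2) * (x_i - z + u_i)^2
        x = [(2 * ai + rho * (z - ui)) / (2 + rho) for ai, ui in zip(a, u)]
        # consensus step: combine subproblem results into an interim solution
        z = sum(xi + ui for xi, ui in zip(x, u)) / n
        # dual step: each unit penalizes its disagreement with the consensus
        u = [ui + xi - z for ui, xi in zip(u, x)]
    return z

print(consensus_admm([1.0, 2.0, 3.0, 6.0]))  # converges to the mean, 3.0
```

Each loop iteration mirrors the chip's cycle: local solves in parallel, a gather into the consensus value, and a scatter of the updated state back to the units.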
 
In a typical CPU or GPU, ADMM is limited because it requires moving a lot of data. So instead the Georgia Tech group developed a system with a “near-memory” architecture.
 
“The ADMM framework as a method of solving optimization problems maps nicely to a many-core architecture where you have memory and logic in close proximity with some communications channels in between these cores,” says Raychowdhury.
 
The test chip was made up of a grid of 49 “optimization processing units,” cores designed to perform ADMM, each with its own high-bandwidth memory. The units were connected to one another in a way that speeds ADMM. Portions of data are distributed to each unit, and the units set about solving their individual subproblems. Their results are then collected, and the data is adjusted and resent to the optimization units for the next iteration. The network that connects the 49 units is specifically designed to speed this gather-and-scatter process.
 
The Georgia Tech team, which included graduate student Muya Chang and professor Justin Romberg, unveiled OPTIMO at the IEEE Custom Integrated Circuits Conference last month in Austin, Tex.
 
The chip could be scaled up to do its work in the cloud—adding more cores—or shrunk down to solve problems closer to the edge of the Internet, Raychowdhury says. The principal constraint in optimizing the number of cores in the prototype, he jokes, was his graduate students’ time.




C3D Labs Updates 3D Model Viewer for 2019

May 28, 2019
Moscow, Russia: May 27, 2019 - C3D Labs, the developer of highly-regarded 3D software development toolkits, announced today that it has released C3D Viewer 2019, an update to its free application for viewing 3D CAD models. The viewer handles standard CAD (computer-aided design) formats, such as JT, STEP, X_T and X_B, SAT, IGES, STL, and VRML, and its own C3D format. C3D Viewer 2019 incorporates some of the geometric modeling, data conversion, and model visualization functions found in C3D Modeler, C3D Converter, and C3D Vision - all elements of its C3D Toolkit.
 
The model viewer, first released by C3D Labs in 2017, gives users full control over examining 3D models, such as navigating and orienting models, setting standard views, switching perspective projections, and adjusting levels of detail. It also plays back animations at varying speeds. This made C3D Viewer useful to both CAD and non-CAD users who access 3D CAD models.
In its update for 2019, C3D Labs extends the capabilities of C3D Viewer by delivering the following advanced, high-performance tools:
New! Dynamic Section Tool
The new dynamic section tool lets users view and examine internal parts of 3D models through sections made with one or more planes. Through the use of OpenGL, C3D Viewer delivers results faster than CAD systems, which typically create sections by modifying the topology of solids.
 
New! Measurement and Calculation Tools
C3D Viewer 2019 now measures the most significant aspects of geometric models: angles, distances between objects, edge lengths, surface areas, and so on. The resulting linear, diametrical, and angular dimensions are displayed clearly in the model window. The viewer also calculates masses, volumes, surface areas, centers of mass, and moments of inertia.
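As a rough illustration of the kind of geometric calculation such a viewer performs (the function below is a hypothetical sketch, not C3D's API), the volume of a closed triangle mesh with outward-facing winding can be computed via the divergence theorem:

```python
def mesh_volume(triangles):
    """Volume of a closed, outward-wound triangle mesh: sum the signed
    volumes of tetrahedra formed by the origin and each triangle.
    (Illustrative only; not C3D Toolkit code.)"""
    def cross(u, v):
        return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    return sum(dot(a, cross(b, c)) for a, b, c in triangles) / 6.0

# Unit corner tetrahedron with outward-facing (counterclockwise) windings.
tet = [((0, 0, 0), (0, 1, 0), (1, 0, 0)),
       ((0, 0, 0), (1, 0, 0), (0, 0, 1)),
       ((0, 0, 0), (0, 0, 1), (0, 1, 0)),
       ((1, 0, 0), (0, 1, 0), (0, 0, 1))]
print(mesh_volume(tet))  # one-sixth, the volume of this corner tetrahedron
```

Mass and center of mass follow from the same per-triangle accumulation, weighted by density.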




How to improve the production efficiency of seamless pipes

May 28, 2019
The first crucial way to improve the output of rolled thick-walled seamless steel pipe is to increase the speed of the rolling mill. Raising the rolling speed first requires solving the balancing of the inertia force and inertia torque of the rolling mill frame, ensuring the strength and rigidity of each component, and addressing problems of lubrication, cooling, and service life.
 
Increasing the feed amount and the elongation is another effective measure to raise the output of the mill. For this purpose, the use of an annular hole block is an ideal solution, as it can lengthen the stand stroke without increasing the diameter of the roll.
 
The third way to raise the output of cold-rolled pipe mills is to increase the effective working coefficient of the mill, which involves a variety of problems. On the one hand, the rolling mill must be reasonably designed and manufactured with high precision to minimize maintenance and repair downtime. On the other hand, the mechanization and automation level of the rolling mill must be improved to reduce auxiliary operation time. A third and very important direction of development is to achieve continuous loading and continuous rolling on the cold rolling mill without stopping.
 
The fourth measure to increase the output of the rolling mill is to increase the number of tubes rolled simultaneously. It should be pointed out that increasing the number of lines does not raise the output of the tube mill by an integral multiple, and the precision of the finished tube will be somewhat reduced; for intermediate processes, however, it is entirely feasible.




Mitsubishi Electric Automation Series of Human Machine Interfaces

May 28, 2019
Mitsubishi Electric Automation, Inc. announces that its GT27 Series of human-machine interfaces (HMI) now includes the option for an HDMI port, allowing users to avoid the time and cost of setting up a PC to accomplish the same feat. Production managers and maintenance personnel can benefit from seeing what the machine operator sees on a large public screen, allowing them to respond faster when action is required.
 
Mitsubishi Electric's GT27 Series of HMIs offers a robust selection of functions and features such as gesture operations, remote connectivity, human sensors, and two USB ports. The addition of an HDMI port allows easy addition of visualization and communication equipment to a factory floor when numerous people need to see information as it is displayed on the HMI.
 
'An HDMI port is not a common feature among HMIs in the marketplace,' said Lee Cheung, product marketing engineer at Mitsubishi Electric Automation, Inc. 'Most HMIs require a PC in order to project the HMI contents on a larger screen. But with the GT27 Series, all you need is a cable.'
 
Any manufacturing or processing facility can benefit from displaying machine data in a space where it can be seen by additional employees. Those in the automotive industry may find it especially useful, incorporating it into their Andon systems to visualize what is happening on the manufacturing floor.



