Amazon DeepLens camera

Amazon targets developers with AI-equipped camera

Image credit: Amazon

Amazon Web Services (AWS), Amazon’s cloud computing arm, has revealed a wireless video camera called DeepLens, which is intended to let developers experiment with deep learning.

The camera has some similarities to Google’s Clips, a camera which uses machine learning to recognise objects, faces and actions and capture moments that may be considered interesting, such as interactions between a user and their pet. However, while Clips is primarily aimed at consumers, DeepLens is aimed at businesses.

The artificially intelligent DeepLens camera was unveiled yesterday, along with compatible computer vision and natural language processing services from AWS. These are intended to help developers create cloud-based machine-learning services almost entirely with AWS tools.

The camera, which costs $249 (£185), will assist developers in the creation and testing of vision-based machine-learning software. It will be available to US customers from April 2018.

“When you build an app that runs on your AWS DeepLens, you can take advantage of a set of pre-trained models for image detection and recognition,” said an AWS blog post. “These models will help you detect cats and dogs, faces, a wide array of household and everyday objects, motions and actions and even hot dogs. We will continue to train these models, making them better and better over time.”

DeepLens has a 4-megapixel camera capable of capturing 1080p video, as well as two microphones. It can perform common computer vision tasks such as object identification, facial recognition and sentiment analysis, using a set of ready-trained or custom machine-learning models.
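For a sense of what such an application might look like in practice, here is a minimal sketch of an on-device inference loop in the style of AWS’s published DeepLens examples. The awscam module name, model path, model type and result format are assumptions and may differ from the actual device SDK.

```python
# Minimal sketch of a DeepLens-style inference loop; module name, paths and
# result format are assumptions based on AWS's published examples.
import awscam  # on-device DeepLens SDK (assumed module name)

# Load an object-detection model that has already been deployed to the device.
model = awscam.Model('/opt/awscam/artifacts/model.xml', {'GPU': 1})

while True:
    # Grab the most recent frame from the camera stream.
    ret, frame = awscam.getLastFrame()
    if not ret:
        continue

    # Run the frame through the on-board model and parse the raw output
    # into labelled detections ('ssd' is the assumed model type).
    raw = model.doInference(frame)
    detections = model.parseResult('ssd', raw).get('ssd', [])

    for obj in detections:
        if obj['prob'] > 0.6:
            print('Detected class {} with confidence {:.2f}'.format(obj['label'], obj['prob']))
```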

It comes with AWS Greengrass Core – software for running AWS compute functions locally on the device – and works with a new service, SageMaker, which is intended to help data scientists build, train and deploy machine-learning models.
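As a rough illustration of the SageMaker side of that workflow, the sketch below shows how a custom model might be trained with the SageMaker Python SDK. The container image, S3 paths and IAM role are placeholders, and the exact argument names vary between SDK versions.

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = 'arn:aws:iam::123456789012:role/SageMakerExecutionRole'  # placeholder IAM role

# Train a model from a custom training container; the image URI and S3 paths are placeholders.
estimator = Estimator(
    image_uri='123456789012.dkr.ecr.us-east-1.amazonaws.com/my-detector:latest',
    role=role,
    instance_count=1,
    instance_type='ml.p3.2xlarge',
    output_path='s3://my-bucket/deeplens-models/',
    sagemaker_session=session,
)

# Kick off the training job against data already uploaded to S3.
estimator.fit({'train': 's3://my-bucket/training-data/'})

# The resulting model artefact (model.tar.gz in output_path) could then be
# registered as a project resource and pushed down to the camera.
```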

The integration of the camera with new and existing AWS products is intended to reduce businesses’ need to look beyond Amazon’s tools when developing their own machine-learning products. The camera is, however, also compatible with the most popular open-source machine-learning frameworks, such as Facebook’s Caffe2 and Google’s TensorFlow.
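To give a flavour of the kind of open-source model a developer might bring to the camera, the short example below classifies a single image with a pre-trained TensorFlow/Keras network. It is a generic illustration rather than DeepLens-specific code, and the image path is a placeholder.

```python
import numpy as np
import tensorflow as tf

# Load a MobileNetV2 classifier pre-trained on ImageNet.
model = tf.keras.applications.MobileNetV2(weights='imagenet')

# Load and preprocess a single image ('frame.jpg' is a placeholder path).
img = tf.keras.preprocessing.image.load_img('frame.jpg', target_size=(224, 224))
x = tf.keras.preprocessing.image.img_to_array(img)
x = tf.keras.applications.mobilenet_v2.preprocess_input(x[np.newaxis, ...])

# Run inference and print the top three predicted labels.
preds = model.predict(x)
for _, label, score in tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]:
    print(label, round(float(score), 3))
```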

“With eyes, ears and a fairly powerful brain that are all located out in the field and close to the action, [DeepLens] can run incoming video and audio through on-board deep-learning models quickly and with low latency, making use of the cloud for more compute-intensive, higher-level processing,” said the blog post.

“For example, you can do face detection on the DeepLens and then let Amazon Rekognition [which performs image and video analysis in apps] take care of the face recognition.”
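A rough sketch of that split might look like the following, with OpenCV’s bundled Haar cascade standing in for the on-device face detector and a pre-built Rekognition face collection (the ‘known-faces’ name is a placeholder) handling recognition in the cloud.

```python
import boto3
import cv2

rekognition = boto3.client('rekognition')

# On-device step: cheap face *detection* on the local frame.
detector = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
frame = cv2.imread('frame.jpg')  # placeholder for a captured camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Cloud step: send each detected face crop to Amazon Rekognition for *recognition*
# against a pre-built face collection ('known-faces' is a placeholder name).
for (x, y, w, h) in faces:
    crop = frame[y:y + h, x:x + w]
    ok, jpeg = cv2.imencode('.jpg', crop)
    if not ok:
        continue
    response = rekognition.search_faces_by_image(
        CollectionId='known-faces',
        Image={'Bytes': jpeg.tobytes()},
        MaxFaces=1,
    )
    for match in response.get('FaceMatches', []):
        print('Matched face', match['Face']['FaceId'], 'similarity', match['Similarity'])
```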

At the AWS re:Invent conference in Las Vegas, AWS CEO Andy Jassy said: “You can program this thing to do almost anything you can imagine, so you can imagine programming the camera with computer vision models, so if you recognise a license plate coming into your driveway it will open a garage door, or you can program an alarm if your dog gets on the couch.”
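As an informal illustration of the dog-on-the-couch idea (not Amazon’s own code), a captured frame could be checked with Rekognition’s label detection and an alert published if both a dog and a couch are found; the frame path and SNS topic ARN are placeholders.

```python
import boto3

rekognition = boto3.client('rekognition')
sns = boto3.client('sns')

# Read a captured frame from disk ('frame.jpg' is a placeholder).
with open('frame.jpg', 'rb') as f:
    image_bytes = f.read()

# Ask Rekognition which objects appear in the frame.
labels = rekognition.detect_labels(Image={'Bytes': image_bytes}, MinConfidence=70)
names = {label['Name'] for label in labels['Labels']}

# Raise an alert if the dog has made it onto the couch.
if {'Dog', 'Couch'} <= names:
    sns.publish(
        TopicArn='arn:aws:sns:us-east-1:123456789012:pet-alerts',  # placeholder topic
        Message='Dog detected on the couch',
    )
```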

Matt Wood, general manager of AI services for AWS, said: “DeepLens runs the model directly on the device. The video doesn’t have to go anywhere. It can be trained with SageMaker and deployed to the device.”

At the conference, it was also announced that Alexa – Amazon’s voice-operated smart assistant – would be brought to Australia and New Zealand in early 2018, and that it would soon be able to accept Amazon Pay for purchases within the Alexa Skills Store.
