
Week Ending: 25th June - A Roundup in I.T. & Tech News


Summer has officially arrived, and if you’ve been busy enjoying the sun you may have missed the latest developments from the I.T. and tech world. Don’t panic: we’re here with our weekly round-up to make sure you’re up to date.

This week we look at the new self-driving delivery programme currently being explored by Ford and Hermes, how Facebook is using AI to detect the most dangerous crime of the future, and concerns over plans to use live facial recognition in CCTV.

Let’s get you up to date.

Ford partners with Hermes for self-driving delivery programme

Car manufacturer Ford has this week announced a new self-driving vehicle research programme designed to help businesses in Europe understand how autonomous vehicles can benefit their operations. Partnering with delivery company Hermes, the programme aims to better understand how other road users would interact with an apparently driverless delivery van.

The vehicle created for this programme is a specially adapted Ford Transit, which features sensors that mimic those of other self-driving vehicles, plus a ‘human car seat’ that conceals the person in control of the vehicle. This enables an experienced, hidden driver to operate the van while giving the impression to those around it that there is no one at the wheel.

So how would it work? Pedestrian couriers will support the delivery van via a smartphone app which allows them to hail the vehicle and remotely unlock the load door once it is safely parked. Once inside the vehicle, voice prompts and digital screens will direct the courier to a locker that contains the parcels that need to be delivered.

Lynsey Aston, at Hermes, commented,

“We’re excited to collaborate with Ford on this proof-of-concept trial, which is all about understanding the potential for autonomous vehicles and if they have a role in delivery in the longer-term future. We’re constantly innovating to incubate and then explore concepts like this, and we look forward to the initial findings, which will no doubt be useful on an industry-wide level.”

This research will allow Hermes and other businesses to begin designing how their teams could work alongside driverless vehicles, including the development of apps.

Read more here.

Facebook can now detect ‘the most dangerous crime of the future’

Social media giant Facebook has developed a model that detects when a video is a ‘deepfake’ and identifies the algorithm used to create it. Deepfake is a term used to describe AI-generated fake videos, considered one of the most dangerous crimes of the future.

Detecting a deepfake relies on knowing whether an image is real or not; unfortunately, the amount of information available to researchers to make that call is somewhat limited. However, Facebook’s new process relies on the unique patterns left behind by the AI model that generated the deepfake.

By running the video or image through a network that looks for ‘fingerprints’ left on the image, such as noisy pixels or asymmetrical features, Facebook can identify the generating model’s ‘hyperparameters’.

I think we’re all thinking the same thing here: what are hyperparameters?

Facebook explained:

“To understand hyperparameters better, think of a generative model as a type of car and its hyperparameters as its various specific engine components. Different cars can look similar, but under the hood they can have very different engines with vastly different components. Our reverse engineering technique is somewhat like recognising the components of a car based on how it sounds, even if this is a new car we’ve never heard of before.”
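To make the ‘fingerprint’ idea a little more concrete, here is a toy sketch of the general technique: isolate the high-frequency noise residual of an image, summarise it as a small signature vector, and attribute the image to whichever known generator has the nearest signature. This is purely illustrative and not Facebook’s actual method; the function names and the histogram-based signature are our own assumptions.

```python
import numpy as np

def noise_residual(img):
    """Isolate high-frequency 'noise' by subtracting a 3x3 box-blurred copy."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    blurred = sum(
        padded[dy:dy + h, dx:dx + w]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    return img - blurred

def fingerprint(img, bins=16):
    """Summarise the residual's distribution as a small signature vector."""
    res = noise_residual(img.astype(np.float64))
    hist, _ = np.histogram(res, bins=bins, range=(-0.5, 0.5), density=True)
    return hist

def closest_model(img, known_fingerprints):
    """Attribute the image to the known generator with the nearest signature."""
    fp = fingerprint(img)
    return min(
        known_fingerprints,
        key=lambda name: np.linalg.norm(fp - known_fingerprints[name]),
    )
```

In practice a real system would learn these signatures with a trained network rather than a fixed histogram, but the attribution step, comparing an unknown image’s residual pattern against a library of known generator fingerprints, follows the same shape.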

Some examples of high-profile deepfakes include a manipulated video of Richard Nixon delivering an Apollo 11 address and one of Barack Obama insulting Donald Trump. These videos may be seen as jokes or pranks, but the technology can lead to far more serious problems: deepfake software is easy to customise and allows malicious actors to conceal themselves.

Explore more here.

Concerns over live facial recognition technology in CCTV cameras

The Information Commissioner, Elizabeth Denham, has this week announced her concerns over plans to roll out live facial recognition technology in CCTV cameras.

She stated that allowing CCTV to recognise people’s faces in real time could be used inappropriately, excessively, or even recklessly.

The controversial technology has been under the spotlight in recent years over fears it may invade people’s privacy, as well as questions about algorithm bias and whether it could create unfair treatment of individuals.

Commissioner Elizabeth Denham commented,

“We’re at a crossroads right now, we in the UK and other countries around the world see the deployment of live facial recognition and I think it’s still at an early enough stage that it’s not too late to put the genie back in the bottle. When sensitive personal data is collected on a mass scale without people’s knowledge, choice or control, the impacts could be significant.”

The collection of sensitive personal data on a mass scale is widely debated. Privacy activists argue that people should be able to visit shopping centres or take their children to a leisure centre without having their biometric data collected and analysed with every step they take.

The Information Commissioner’s recently published report details the concerns and legal issues organisations should be aware of before using this technology.

There are many positives to using this technology, the main one being that criminals currently on the run could be identified more easily and quickly, but does that outweigh the negatives? What are your thoughts about using live facial recognition in CCTV?

Find out more here.


Those were some of this week’s biggest stories in I.T. and tech, but if you want more content, follow us across our four social media channels.