BVE 2019 – the unexpected trend towards AI

[Image: 360-degree camera. One of the few 360 camera rigs at BVE this year.]

The end of February saw the annual Media Team trip to the British Video Expo (BVE) at ExCeL in London. In the past, we have used the event to investigate new equipment, chat to other industry experts and learn new techniques from some of the best media professionals in the world. And although the event isn't aimed at the education sector, it is a very good way to gauge both current and upcoming trends.

This year, however, held a bit of a surprise. Before the seminars were announced, we were expecting most of the talks and exhibits on the show floor to be about augmented and virtual reality (AR/VR). We are several years into the technology and it is becoming more established in both the media world and the education sector. It is something we are trying to push forward with ourselves (as can be seen in a recent blog post), yet it was represented by only a smattering of products to try out and no talks (at least on the day we were able to visit).

Instead, the focus was on the use of artificial intelligence (AI) in the development of media, whether through AI itself, machine learning or automated workflows. It seemed like an odd thing to be showcasing so strongly at first, but the more we heard and saw, the more revolutionary it seemed.

Many applications for these technologies were discussed, some of them very useful and relevant to what we are doing here at the University. A short list of uses of AI in the media sector includes:

  • Increased analysis of data for improved targeting (increasing revenue streams).
  • Camera vision for a range of applications from automating statistics and replays in televised sports to creating shots and edits for an entire film.
  • Automating workflows.

The most interesting application, to us, is the use of machine learning to improve the logging of data (adding automatic metadata to the assets we create) and to automate processes such as transcribing and subtitling videos.
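To give a purely illustrative idea of what that automatic logging might look like (this is a sketch, not our actual workflow, and the labels here are invented; in practice they would come from a computer-vision or speech model), the generated metadata could be stored as a simple "sidecar" file alongside each asset:

```python
# Illustrative only: store machine-generated labels as a JSON "sidecar" file
# next to the video asset. In practice the labels would come from an ML
# service (object detection, speech-to-text keywords, and so on).
import json
from pathlib import Path

def write_sidecar_metadata(asset_path, labels):
    """Save auto-generated metadata next to the asset, e.g. lecture.mp4.json."""
    asset = Path(asset_path)
    sidecar = asset.with_name(asset.name + ".json")
    metadata = {
        "asset": asset.name,
        "auto_labels": sorted(set(labels)),  # de-duplicated, searchable tags
    }
    sidecar.write_text(json.dumps(metadata, indent=2), encoding="utf-8")
    return sidecar

# Hypothetical labels a model might return for a lecture recording.
write_sidecar_metadata("lecture.mp4", ["lecture", "whiteboard", "speaker", "slides"])
```

The appeal is that tags like these make a large archive of footage searchable without anyone having to log it by hand.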

The second of those, transcription and subtitling, is key for us. A new EU directive has come into force in the last few months that aims to make digital content accessible over the next two years. This is potentially a huge amount of work (but also incredibly important). One way of improving the accessibility of video content is to add subtitles and/or a transcript. Over the years we have created lots of videos, and that means lots of subtitles and lots of time!

Machine learning has developed greatly in this area over the last few years, and automatic caption generation has gone from hopeless to impressively accurate in a surprisingly short amount of time. We currently use YouTube to automatically generate subtitles for our videos, and the technology has come on in leaps and bounds in the time we have been using it. It is still an imperfect solution, though, both in terms of the accuracy of the transcription and the process of getting the videos up on YouTube and then the transcript files back. Hopefully, this move towards technologies such as machine learning will help to create a universal platform for transcription that can benefit everyone!
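To make the subtitling step concrete: whichever speech-to-text service does the transcription, the output usually boils down to timed text segments, which then need packaging into a subtitle format such as SRT. The sketch below is illustrative only (it is not our actual workflow, and the `segments` input is made up) and shows how little is involved in that final step using plain Python:

```python
# Illustrative sketch: turn timed transcript segments (from any
# speech-to-text service) into a standard SRT subtitle file.

def srt_timestamp(seconds):
    """Format a time in seconds as an SRT timestamp, e.g. 00:01:02,500."""
    total_ms = round(seconds * 1000)
    hours, rem = divmod(total_ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    secs, ms = divmod(rem, 1_000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"

def write_srt(segments, path):
    """Write (start, end, text) segments out as a numbered SRT file."""
    with open(path, "w", encoding="utf-8") as f:
        for i, (start, end, text) in enumerate(segments, start=1):
            f.write(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n\n")

# Example: two segments from a hypothetical transcription result.
segments = [
    (0.0, 2.5, "Welcome to the Media Team blog."),
    (2.5, 6.0, "Today we're looking at automatic subtitling."),
]
write_srt(segments, "captions.srt")
```

The hard part, of course, is producing accurate segments in the first place; that is exactly where the machine learning sits.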

What do you think? Leave us a comment to share your thoughts...
