NSBE50 in ATL!: Responsible AI

This year, NSBE 50 was hosted in Atlanta, Georgia. The following post sums up my findings and thoughts from the trip.

Responsible AI

I attended a workshop hosted by Google. The focus was responsible AI in the workplace. To my understanding, responsible AI is the area of research focused on developing aligned and safe uses of AI in our daily lives.

AI is going through major changes every day, and our policy-makers are lagging behind in regulation, protection, and understanding of these technologies. It baffles me sometimes how abstracted current policy-makers are from certain topics. They are making real decisions based on information from secondhand sources 🤯

The two speakers worked on the research side and on the development of the Google Pixel Camera, and they brought interesting insights from their work. Google has been competing and innovating in the development of the smartphone camera, implementing features such as night mode, Super Zoom, and, most recently, “Best Take”.

Best Take is the notion of using AI to intelligently curate and create the “perfect” moment from a group of photos taken of the same scene. Everyone has taken a group photo, and oftentimes there is never one photo where everyone is on the same page.

In comes Best Take: the ability to grab everyone’s best face and place it into one photo, a moment that never existed. Immediately I began thinking about the philosophical implications of this notion. What is reality? Is having AI in the loop, altering and influencing our perception of things, safe?

We often rely on photos to trigger and relive memories of the past. What if we use one of these photos and begin to remember things differently? Scary!

This scenario was very good at capturing the importance of responsible AI and why these considerations need to be taken at all levels of the development cycle. I know all too well that engineers can be too far removed from the real world to even think about how the average person may use their work. Blindly coding away at a system that will transform lives sounds cool, but there needs to be a solid moral compass in hand.

The second good point regarding responsible AI came from an analogy one of the speakers made.

Imagine AI as a nice, powerful car. It has all the amenities: it is powerful, fast, etc. The only thing missing is the brakes. Alongside the tires and engine, the brakes are one of the most important components of any vehicle. Responsible AI is ensuring the car has the right brakes to prevent future crashes.

I find both these points to be extremely powerful because they put into context the importance of thinking critically about the work you do. Profits are at the forefront of most decisions made by companies, and we need to start humanizing ourselves, our society, and our world.

svntii