Thanks to machine learning, creating great-looking video is easier than ever, and it seems to be getting easier at an accelerating pace. For creators who don't have the tools or skills to achieve a desired effect, that's a big advantage. Sometimes the cost of upgrading equipment is the limiting factor; other times it's simply skill level, or a lack of time to invest in learning those skills. So Google continues to help video creators of all levels.
With technology simply called "mobile real-time video segmentation," Google uses machine learning to separate the person in a scene from the background, producing two layers that can be manipulated independently.
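Google hasn't published the code behind this feature, but the core idea of the two-layer split can be sketched with ordinary alpha compositing. In this hypothetical example, a per-pixel mask (1.0 where the model believes a person is, 0.0 elsewhere) is used to blend the original frame over a replacement backdrop; the function name and array shapes are illustrative assumptions, not Google's API.

```python
import numpy as np

def composite(frame, mask, background):
    """Blend a foreground frame over a new background using a
    per-pixel segmentation mask (1.0 = person, 0.0 = background).

    frame, background: float arrays of shape (H, W, 3), values in [0, 1]
    mask: float array of shape (H, W, 1), values in [0, 1]
    """
    # Once the person and the background are separate layers, each
    # can be manipulated independently before recombining them.
    return mask * frame + (1.0 - mask) * background

# Tiny 2x2 demo: left column is "person", right column is original background.
frame = np.full((2, 2, 3), 0.8)      # bright foreground pixels
background = np.zeros((2, 2, 3))     # black replacement backdrop
mask = np.array([[[1.0], [0.0]],
                 [[1.0], [0.0]]])
out = composite(frame, mask, background)
```

In practice the mask from a neural network is soft at the edges (values between 0 and 1), which is what makes hair and clothing boundaries blend rather than cut out harshly.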
In an animated example shown (at the URL below, and seen in still form here), the effect isn't perfect, but it's certainly impressive for where the technology stands. This isn't post-processing, which would definitely be more accurate, but on-the-fly number crunching to produce the desired effect.
One really cool aspect of this feature is that the backdrop doesn't act only as a backdrop. The theme (or scene) you choose can also affect your own layer: a sunny-day backdrop could brighten your face, while a nighttime scene will apply appropriately darker tones.
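Google hasn't described how its tone matching works, but one simple way to get this kind of effect is to nudge the person layer's overall brightness toward the backdrop's. The sketch below is purely illustrative under that assumption; the function name, the `strength` parameter, and the gain formula are all inventions for this example.

```python
import numpy as np

def match_tone(person, backdrop, strength=0.5):
    """Nudge the person layer's mean brightness toward the backdrop's,
    so a sunny scene brightens the face and a night scene darkens it.

    person, backdrop: float arrays of shape (H, W, 3), values in [0, 1]
    strength: 0.0 leaves the person untouched, 1.0 fully matches means.
    """
    # Ratio of backdrop brightness to person brightness (guard against /0).
    gain = backdrop.mean() / max(person.mean(), 1e-6)
    # Blend between no adjustment (factor 1.0) and the full gain.
    adjusted = person * (1.0 + strength * (gain - 1.0))
    return np.clip(adjusted, 0.0, 1.0)

# Demo: a bright face composited into a dark night scene gets toned down.
person = np.full((2, 2, 3), 0.8)
night_backdrop = np.full((2, 2, 3), 0.2)
toned = match_tone(person, night_backdrop)
```

A production system would likely match color temperature and contrast per channel rather than a single global brightness, but the principle is the same: the backdrop informs how the foreground layer is graded.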
Segmentation example without a background
Currently, this feature is only available to select YouTubers in a limited beta, tying into the service's stories feature. No matter the mood or message, it looks like there will be a suitable enough scene to complement it.
With Google's tech, scenes can be analyzed at up to 100 FPS, delivering a much more accurate representation of what you want people to see. Your location and lighting conditions (and even your hair) will help dictate how good the end result looks, but as with all things machine learning, there's always room for improvement, and that improvement is probably right around the corner.