This is recent work from the Berkeley AI Research (BAIR) group. It is titled "Everybody Dance Now" because it presents a "do as I do" motion transfer approach. What does that mean in a nutshell? Given a source video of a person dancing, plus a few minutes of a target subject performing standard moves, we can transfer the source's moves onto the target. The problem is treated as per-frame image-to-image translation with spatio-temporal smoothing: pose detections serve as an intermediate representation between source and target, so the system learns a mapping from pose images of the source to the target subject's appearance. This lets us transfer motion between human subjects in different videos. We have a source dancing somewhere, and we can take those pose estimates and bring the motion to different target subjects.
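To give a feel for the smoothing step, here is a minimal sketch of one simple form of temporal smoothing over detected pose keypoints: an exponential moving average across frames. Note this is only an illustration of the idea; the paper's actual setup achieves temporal coherence differently, by conditioning the generator on the previous frame, and the `smooth_poses` function and its toy data below are assumptions for the example.

```python
import numpy as np

def smooth_poses(keypoints, alpha=0.7):
    """Temporally smooth per-frame pose keypoints with an
    exponential moving average across frames.

    keypoints: array of shape (num_frames, num_joints, 2),
               (x, y) coordinates of each detected joint per frame.
    alpha: weight given to the previous smoothed frame
           (higher = smoother, but more lag).
    """
    smoothed = np.empty_like(keypoints, dtype=float)
    smoothed[0] = keypoints[0]
    for t in range(1, len(keypoints)):
        smoothed[t] = alpha * smoothed[t - 1] + (1 - alpha) * keypoints[t]
    return smoothed

# Toy example: one joint jittering horizontally around x = 100.
poses = np.array([[[100.0, 50.0]],
                  [[104.0, 50.0]],
                  [[96.0, 50.0]]])
print(smooth_poses(poses)[-1])  # jitter is damped toward x = 100
```

In a full pipeline, the smoothed keypoints would be rendered as stick-figure pose images and fed to the trained image-to-image generator, which synthesizes the target subject in that pose.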

Have a look at this for more details:

Link to research paper [arXiv]

Visit AI Journal for more videos, and don't forget to subscribe. Stay connected with us on Twitter for updates on AI research. Please support me on Patreon.