The ClothCap system, developed by a team at the Max Planck Institute for Intelligent Systems (MPI-IS), can scan a person, capture the clothes they are wearing, and superimpose that clothing on someone else.
Dr. Gerard Pons-Moll, research scientist at MPI-IS and lead on the project, said: "Our approach is to scan a person wearing the garment, separate the clothing from the person, and then render it on top of a new person. This process captures all the detail present in real clothing, including how it moves, which is hard to replicate with simulation."
Previous systems could not animate new people in the captured clothing the way ClothCap can.
Dr. Sergi Pujades, postdoctoral researcher at MPI and one of the authors, added: "The software turns the captured scans into separate meshes corresponding to the clothing and the body ...
"Other methods capture people in clothing, but nobody could realistically animate new subjects with the captured clothing."
Pons-Moll added: "Motion capture revolutionized the fields of animation, gaming and biomechanics. Much in the same way that motion capture replaced a lot of manual editing of motion from scratch, techniques like ClothCap could replace the way clothed characters are edited and animated digitally."
There are hopes the software could one day be widely used in the clothing retail industry.
Michael Black, director at MPI-IS, explained: "This scanner captures every wrinkle of clothing at high resolution. It is like having 66 eyes looking at a person from every possible angle ...
"First, a retailer needs to scan a professional model in a variety of poses and clothing to create a digital wardrobe of clothing items. Then a user can select an item and visualize how it looks on their virtual avatar."