
Where research means creativity



MIRALab SARL offers customized solutions based on our technologies or according to the wishes of our clients. MIRALab SARL also actively participates in European research projects, as part of its innovation strategy. Some examples of MIRALab SARL's technologies are:


Virtual Clothing

Cloth modeling has long been a topic of research in the textile mechanics and engineering communities. In the mid-1980s, however, researchers in computer graphics also became interested in modeling cloth in order to include it in 3D computer-generated images and films. Cloth modeling and garment simulation in computer graphics have since grown from basic shape modeling to the modeling of cloth's complex physics and behaviours. In computer graphics, only the macroscopic properties of the cloth surface are considered, and physical accuracy is given less importance than visual realism. A multidisciplinary trend has emerged, however, and the textile engineering and computer graphics communities have begun to combine their expertise in solutions that satisfy both fields.

MIRACloth, a system developed at MIRALab, can be used for building and animating garments on virtual actors. It is a general animation framework in which different types of animation (keyframe, mechanical simulation) can be associated with different kinds of objects: static, rigid, and deformable. The methodology for building garments follows traditional real-life garment design: 2D patterns are created in a polygon editor, imported into the 3D simulator, and placed around the body of a virtual actor. Seaming brings the patterns together, and the resulting garment can then be animated on a moving virtual actor.
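The mechanical simulation behind such garment animation is commonly implemented as a mass-spring system. The sketch below is a minimal, hypothetical illustration of one integration step (explicit Euler with Hooke's-law springs), not MIRACloth's actual code:

```python
import math

# Minimal mass-spring cloth integration step (explicit Euler).
# Illustrative sketch only, not MIRACloth's implementation.

GRAVITY = (0.0, -9.81, 0.0)

def step(positions, velocities, springs, masses, dt, stiffness=500.0, damping=0.5):
    """Advance cloth particles by one time step.
    positions/velocities: lists of [x, y, z]; springs: list of (i, j, rest_length)."""
    # Start with gravity acting on every particle.
    forces = [[m * g for g in GRAVITY] for m in masses]
    # Accumulate spring forces along each structural spring.
    for i, j, rest in springs:
        d = [positions[j][k] - positions[i][k] for k in range(3)]
        length = math.sqrt(sum(c * c for c in d)) or 1e-9
        f = stiffness * (length - rest)  # Hooke's law along the spring axis
        for k in range(3):
            forces[i][k] += f * d[k] / length
            forces[j][k] -= f * d[k] / length
    # Integrate velocities (with simple damping) and positions.
    for i in range(len(positions)):
        for k in range(3):
            velocities[i][k] += dt * (forces[i][k] / masses[i]) - damping * dt * velocities[i][k]
            positions[i][k] += dt * velocities[i][k]
    return positions, velocities
```

In practice, garment simulators add bending and shear springs, collision response against the body, and more stable integrators, but the force-accumulate-then-integrate loop above is the core idea.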



Virtual Heritage

Today's cutting-edge technology in computer graphics and virtual reality offers a number of tools where, rather than trying to describe an inaccessible building or structure using words, sketches, or pictures, the entire scene can be reconstructed in three dimensions and viewed from various angles and viewpoints. However, most of these virtual environments display static 3D architectural models that can be navigated in real time only in a passive manner, offering little in the way of actual exploration, interaction, and participation. Active engagement requires human interaction and communication between the user and the people of the site's era, which demands virtual embodiments of both the participant and those people.

The basic objective of MIRALab's research is to implement a complete framework for bringing to life cultural heritage structures that are inaccessible, either in time (i.e. they no longer exist, or were never built) or by distance and restriction (i.e. they are too far away to be reached by normal means, or they are closed to the general public). More specifically, the main goal of this research is to develop and implement the concepts of:

I. "Virtual Heritage", which is the use of computer-based interactive technologies to record, preserve, or recreate artifacts, sites, and actors of historic, artistic, religious, and cultural significance, and to deliver the results openly to a global audience in a way that provides formative educational experiences through electronic manipulation of time and space.

II. "Inhabited Virtual Cultural Heritage", in the field of conservation and restoration. Inhabited Virtual Cultural Heritage is a novel way of conserving, preserving, and interpreting cultural history. By simulating an ancient community within the virtual reconstruction of its habitat, the public can better grasp and understand the culture of that community.

This MIRALab research project aims to deliver a globally accessible and truly virtual 3D environment; "virtual" here means not a series of stitched images or panoramas but actual, realistic 3D models. It is envisioned not only as an interactive, entertaining experience but, more importantly, as an educational tool that can transport viewers into the sites to learn about their rich culture and history firsthand.




Networked Collaborative Virtual Environments

Networked Collaborative Virtual Environments (NVEs) have been a hot topic of research for some time. An NVE can be defined as a single environment shared by multiple participants connected from different hosts. However, most existing systems restrict communication between participants to text messages or audio. The natural means of human communication are richer than this: facial expressions, lip movements, body postures, and gestures all play an important role in everyday communication.

Part of our research in this field strives to incorporate such natural means of communication into a virtual environment, as well as to develop more realistic environments. Realism includes not only believable appearance and simulation of the virtual world but also the natural representation of participants. This representation fulfills several functions:
- The visual embodiment of the user.
- The means of interaction with the world.
- The means of feeling various attributes of the world using the senses.

NVEs with virtual humans are emerging from two threads of research with a bottom-up tendency. First, over the past several years, many NVE systems have been built using various network topologies and computer architectures. The usual practice is to bring together previously developed monolithic applications within one standard interface, building multiple logical or actual processes that each handle a separate element of the VE. Second, virtual human research has matured to the point of providing realistic-looking virtual humans that can be animated with believable behaviours at multiple levels of control.
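A common pattern in such systems is for each host to broadcast its participant's state (position, posture, facial expression) and merge the updates it receives into a local replica of the shared world. The sketch below uses a hypothetical JSON message format, not any specific NVE system's protocol:

```python
import json

# Illustrative sketch of state sharing in a networked virtual environment:
# each host broadcasts its participant's state; every host applies updates
# it receives to its local copy of the shared world. (Hypothetical message
# format and field names.)

def encode_update(participant_id, position, posture, expression):
    """Serialize one participant's state for broadcast."""
    return json.dumps({
        "id": participant_id,
        "pos": position,           # [x, y, z] in world coordinates
        "posture": posture,        # e.g. "standing", "waving"
        "expression": expression,  # e.g. "smile", "neutral"
    })

def apply_update(world, message):
    """Merge a received update into the local replica of the shared world."""
    update = json.loads(message)
    world[update["id"]] = {k: update[k] for k in ("pos", "posture", "expression")}
    return world
```

Real NVEs add interest management, dead reckoning, and compression on top of this so that updates scale to many participants.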



Hair Simulation

One of the many challenges in simulating believable virtual humans has been producing realistic-looking hair. Two decades ago, virtual humans were given polygonal hair structures; today this is no longer acceptable. The realistic visual depiction of virtual humans has improved over the years, with attention given to all the details necessary for producing visually convincing results.

A human scalp typically carries 100,000 to 150,000 hairs. Geometrically, they are long, thin, curved cylinders of varying thickness. Strands can have any degree of waviness, from straight to curly, and hair color ranges from white to grey and red to brown, depending on pigmentation, with varying shininess. The difficulties of simulating hair thus stem from the huge number and geometric intricacy of individual hairs, the complex interaction of light and shadow among them, the small thickness of a single hair relative to the rendered image, and the intricate hair-to-hair interaction during motion.

Hair simulation comprises three main aspects: hair shape modeling, hair dynamics (animation), and hair rendering; these aspects are often interconnected in practice. Hair shape modeling deals with the exact or approximate creation of thousands of individual hairs: their geometry, density, distribution, and orientation. Hair dynamics addresses hair movement, collision with other objects (particularly relevant for long hair), and self-collision. Hair rendering involves color, shadows, specular highlights, varying degrees of transparency, and anti-aliasing. Each of these aspects is a research topic in its own right.
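For the dynamics aspect, a single strand is often treated as a chain of points anchored at the scalp: after gravity moves the free points, each segment is re-projected to its rest length. This position-based sketch is illustrative only; real systems also handle collisions and inter-hair interaction:

```python
import math

# Sketch of single-strand hair dynamics using a position-based length
# constraint: the root stays fixed to the scalp, and each segment is
# restored to its rest length after gravity displaces the free points.
# Illustrative only; collisions and hair-to-hair forces are omitted.

def simulate_strand(points, dt, rest_length, gravity=-9.81):
    """points: list of [x, y, z]; points[0] is the fixed root."""
    # Gravity displacement on the free points (root excluded).
    for p in points[1:]:
        p[1] += gravity * dt * dt
    # Re-project each segment onto its rest length, root to tip.
    for i in range(1, len(points)):
        prev, cur = points[i - 1], points[i]
        d = [cur[k] - prev[k] for k in range(3)]
        length = math.sqrt(sum(c * c for c in d)) or 1e-9
        scale = rest_length / length
        for k in range(3):
            cur[k] = prev[k] + d[k] * scale
    return points
```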



Medical Simulation

Our research here consists of understanding and visualizing the functioning of human articulations. To this end, a particular articulation, the hip, has been selected. From MRI data we reconstruct and segment bones, cartilage, ligaments, and muscles. Biomechanical interrelationships will also be modeled, and the entire structure will be generalized to any individual. Medical doctors will benefit from this technology, as they will be able to visualize the hip of a specific individual in 3D, observe its motion, and more easily diagnose possible problems.

Our motivation derives from the research developed and the know-how acquired during our former ESPRIT European medical project, CHARM (Comprehensive Human Animation Resource Model). During this project a generic 3D solid shoulder was developed, along with additional elements. The shoulder was constructed as a physically based model able to simulate movements and deformations, and the modeling and simulations were validated using medical data. This project was the first to use the Visible Human data set, and it opened an innovative path to 3D visual simulation in the medical computing field.

From this experience, we propose to reconstruct and simulate a functional, individualized 3D hip for any patient from generic models. For it to be applicable in surgeons' everyday diagnostic practice, the resulting virtual 3D hip must be:
· Precise, so that diagnosis can be accurate
· Validated, so that diagnosis can be reliable
· Aware of current diagnostic practice and adapted to it, so that the visualization-guided diagnosis procedure can be efficiently integrated

To achieve this objective, three main issues will be addressed:
1. The first issue is to create, from medical images, a database of hip models that are generic for both male and female anatomy and that are animatable (i.e. they contain topological information). The focus will first be on bone structure, cartilage, and ligaments, and will then extend to muscles and tendons. The generic hip model will be used to guide and compensate for the incomplete data obtained from actual patients. A robust methodology for generic model surface reconstruction from medical images will be designed, together with a classification of the anatomical variations of the hip. The database of generic models will be defined so as to provide an average patient covering the anatomical variations; its size will depend on those variations and on the level of precision of the individual patients' data.

2. The second issue is to be able to reconstruct, from any available individual medical image data of the hip, an individual hip from the generic models. A usual daily MR scanning session takes about 20 minutes, compared with the 2 hours spent obtaining our current generic model data set from a volunteer at optimal spatial and contrast resolution. As a result, the image data set usually acquired from individual patients does not contain the complete information required for a full 3D surface reconstruction. It would not be reasonable to lengthen a patient's MRI acquisition session, given the limited MR scanner availability in clinical practice and out of consideration for the patient's health. The generic-model-based reconstruction approach will therefore guide the patient's hip reconstruction from the sparse data available under the current MRI acquisition protocol.

3. The third issue is to define a biomechanical model to simulate the motion of the hip and to understand possible malfunctions of the articulation in individual patients. The biomechanical model will focus on the interrelations of the various anatomical elements (bones, cartilage, ligaments, muscles, and tendons) involved in the joint articulation and their influence on the hip's range of motion. Dynamic MR images of patients will be used as an "internal" motion capture technique to extract information about the different anatomical components of the hip for the biomechanical model and to evaluate the accuracy of the individualized virtual hip in motion.
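As a toy illustration of generic-model-guided reconstruction, the sketch below fits a uniform scale and translation mapping a few landmarks of a generic model onto sparse patient landmarks, then applies the same transform to the whole model. The actual method would use far richer, anatomy-aware deformation models; all names here are hypothetical:

```python
# Toy sketch of generic-model-guided reconstruction: fit a uniform scale
# and translation from generic-model landmarks to the sparse landmarks
# measured on a patient's MRI, then transform every vertex of the generic
# model. (Hypothetical simplification of the approach described above.)

def fit_scale_translation(generic_pts, patient_pts):
    """Least-squares scale s and translation t such that s*g + t ~ p."""
    n = len(generic_pts)
    gc = [sum(p[k] for p in generic_pts) / n for k in range(3)]   # centroids
    pc = [sum(p[k] for p in patient_pts) / n for k in range(3)]
    num = sum((g[k] - gc[k]) * (p[k] - pc[k])
              for g, p in zip(generic_pts, patient_pts) for k in range(3))
    den = sum((g[k] - gc[k]) ** 2 for g in generic_pts for k in range(3))
    s = num / den if den else 1.0
    t = [pc[k] - s * gc[k] for k in range(3)]
    return s, t

def apply_transform(model_pts, s, t):
    """Apply the fitted transform to all vertices of the generic model."""
    return [[s * p[k] + t[k] for k in range(3)] for p in model_pts]
```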



Social Robotics

MIRALab's research with social robots focuses on several aspects of social interaction, such as affective computing, dialogue management and decision making, expressive behavior generation, and facial expression recognition. Using expertise gained over years in the field of interactive virtual characters, MIRALab applies realistic facial animation and personality and emotion models to a highly realistic human-like robot.

Our current research focuses on long-term interaction with a social robot that can interact with users multiple times over a long period and establish engaging interpersonal relationships with them. This involves remembering people, their names, and important past exchanges. Interaction architectures for socially intelligent virtual characters and robots have so far focused on short-term interactions, and very few researchers have considered the challenges of long-term social interaction, because building such a system requires an interdisciplinary view at the intersection of human-computer and human-robot interaction, computer vision and animation, artificial intelligence, robotics, and the social sciences.

In our current system, the human-like robot Eva can interact with users by recognizing and remembering their names, and can infer a user's emotional state through facial expression recognition and speech. Based on user input and her personality, Eva produces appropriate emotional responses and maintains a model of the long-term interpersonal relationship between her and each user. For example, if a user is friendly, Eva can remember this in future interactions and answer accordingly. She can also act on her goals and plan future actions based on past interactions. Eva's animation system includes lip-synchronized speech, facial expressions, and emotion-driven gaze and head behavior. The realism of the face lets us animate it using techniques from computer animation such as data-driven methods (e.g. motion capture) and animation blending. Using highly realistic robots in social interaction research also sets a higher standard for evaluation: any breakdown in behavior is easily noticed by users, which helps refine the underlying social, cognitive, and expressive models.
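A long-term relationship model of the kind described can be sketched as a per-user value that rises with friendly exchanges and falls with hostile ones, nudging the response style at the next encounter. The update rule below is hypothetical, not Eva's actual model:

```python
# Sketch of a long-term interpersonal relationship model: each user's
# relationship value moves toward the valence of the latest interaction
# and biases the response style on the next encounter.
# (Hypothetical update rule and thresholds, for illustration only.)

class RelationshipModel:
    def __init__(self, learning_rate=0.3):
        self.learning_rate = learning_rate
        self.relations = {}  # user name -> relationship value in [-1, 1]

    def update(self, user, interaction_valence):
        """interaction_valence in [-1, 1]: -1 hostile, +1 friendly."""
        old = self.relations.get(user, 0.0)
        new = old + self.learning_rate * (interaction_valence - old)
        self.relations[user] = max(-1.0, min(1.0, new))
        return self.relations[user]

    def response_style(self, user):
        """Map the stored relationship value to a coarse response style."""
        v = self.relations.get(user, 0.0)
        if v > 0.3:
            return "warm"
        if v < -0.3:
            return "guarded"
        return "neutral"
```

Because the value persists between sessions, repeated friendly interactions accumulate, which is what lets the robot answer a familiar, friendly user differently from a stranger.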