Automatic BSL translation for all media
What can we do together?
Together, we are creating a world without communication boundaries. Your involvement can help us revolutionize the way deaf people interact with the world around them. Join our Migam project and let's break down these barriers together, opening new possibilities for the deaf community worldwide.
We are seeking organizations working on behalf of the Deaf who are interested in participating in the initial testing of our avatar.
This initiative is a step towards breaking communication barriers and enhancing accessibility for the deaf community. We aim to collaborate with partners committed to making a significant impact.

How can we do it?
- Know BSL? Become our beta tester.
- Work for an organization advocating for the Deaf? Become our partner.
- An AI engineer wanting to contribute to the project? Join our team.
- Work for an organization where accessibility is key? Become our client.
- An investor interested in socially responsible projects? Let's talk.

What can we achieve?
Through collaboration, we aim to create innovative solutions that enhance communication accessibility for the deaf community. By joining forces, we can develop and refine technologies such as our sign language avatar, making digital content universally accessible. Together, we'll bridge communication gaps, empowering individuals with hearing impairments to participate fully in all aspects of society. Let's work together to build a more inclusive world.

What we have already achieved

Generative Model
BREAKTHROUGH TECHNOLOGY
ASLAC is being built around a custom CLIP (Contrastive Language–Image Pre-training) model. This state-of-the-art neural network efficiently links text and images, enabling zero-shot transfer learning and natural language supervision. The technology also incorporates Transformer models, a class of machine learning models that have revolutionised the field of natural language processing. These models allow for more accurate and contextually aware translations, significantly enhancing the quality of the generated sign language videos.
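As a rough illustration only (not ASLAC's actual architecture), the sketch below shows a CLIP-style contrastive objective pairing a Transformer text encoder with a Transformer encoder over sign-pose sequences. All module sizes, the 274-dimensional per-frame keypoint vector, and the temperature value are assumptions made for the example.

```python
# Illustrative sketch of a CLIP-style text <-> pose-sequence contrastive model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextEncoder(nn.Module):
    """Toy Transformer encoder over token ids -> one embedding per sentence."""
    def __init__(self, vocab_size=30000, dim=256, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)
        self.proj = nn.Linear(dim, dim)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        h = self.encoder(self.embed(tokens))    # (batch, seq_len, dim)
        return self.proj(h.mean(dim=1))         # (batch, dim)

class PoseEncoder(nn.Module):
    """Toy Transformer encoder over per-frame keypoint vectors."""
    def __init__(self, keypoint_dim=274, dim=256, heads=4, layers=2):
        super().__init__()
        self.inp = nn.Linear(keypoint_dim, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)
        self.proj = nn.Linear(dim, dim)

    def forward(self, poses):                   # poses: (batch, frames, keypoint_dim)
        h = self.encoder(self.inp(poses))
        return self.proj(h.mean(dim=1))

def clip_loss(text_emb, pose_emb, temperature=0.07):
    """Symmetric InfoNCE loss: matching (text, pose) pairs get the highest score."""
    text_emb = F.normalize(text_emb, dim=-1)
    pose_emb = F.normalize(pose_emb, dim=-1)
    logits = text_emb @ pose_emb.t() / temperature          # (batch, batch)
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```

Trained this way, a sentence and its signed rendition land close together in a shared embedding space, which is what enables the zero-shot matching behaviour described above.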
Technology Overview
Moreover, the technology harnesses the power of OpenPose for skeleton "reading". OpenPose is a real-time multi-person system that jointly detects human body, hand, facial, and foot keypoints in single images. ASLAC uses this system to accurately capture and interpret sign language movements, which then drive expressive 3D avatars rendered with WebGL-, WebGPU-, Unity-, or Unreal-based techniques.
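A minimal sketch of that keypoint-extraction step is below. It assumes OpenPose has been built with its Python bindings (pyopenpose) available, and that the avatar renderer consumes a simple per-frame JSON of keypoints; the file names and JSON layout are illustrative, not ASLAC's actual pipeline.

```python
# Sketch: extract body/hand/face keypoints from a signing video with OpenPose,
# then dump them as JSON that a WebGL/Unity avatar rig could consume.
import json
import cv2
import pyopenpose as op  # requires OpenPose built with BUILD_PYTHON=ON

params = {"model_folder": "openpose/models/", "hand": True, "face": True}
wrapper = op.WrapperPython()
wrapper.configure(params)
wrapper.start()

def as_list(arr):
    """OpenPose returns None when nothing is detected in a frame."""
    return [] if arr is None else arr.tolist()

capture = cv2.VideoCapture("signing_clip.mp4")
fps = capture.get(cv2.CAP_PROP_FPS)
frames = []
while True:
    ok, frame = capture.read()
    if not ok:
        break
    datum = op.Datum()
    datum.cvInputData = frame
    # Older OpenPose releases accept a plain list instead of op.VectorDatum.
    wrapper.emplaceAndPop(op.VectorDatum([datum]))
    frames.append({
        "body": as_list(datum.poseKeypoints),           # (people, 25, 3): x, y, confidence
        "left_hand": as_list(datum.handKeypoints[0]),   # (people, 21, 3)
        "right_hand": as_list(datum.handKeypoints[1]),  # (people, 21, 3)
        "face": as_list(datum.faceKeypoints),           # (people, 70, 3)
    })
capture.release()

with open("signing_clip_keypoints.json", "w") as f:
    json.dump({"fps": fps, "frames": frames}, f)
```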

Datasets
Dataset Juggling and Preparation within ASLAC's Framework
ASLAC's commitment to producing a groundbreaking and efficient sign language translation system hinges on the meticulous preparation and management of its training datasets. The convergent use of OpenPose, OpenCV, and Python makes ASLAC's vision a reality.
1. Harnessing OpenPose's Tri-Model Approach
The foundational step in dataset preparation revolves around the state-of-the-art body pose estimation tool, OpenPose. ASLAC employs three distinct OpenPose models, each fine-tuned to cater to different components of the human anatomy: the body, hands, and face.
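One step this tri-model approach implies is merging the three keypoint sets into a single per-frame feature vector for training. The sketch below shows one plausible way to do it; the keypoint counts follow OpenPose's BODY_25, 21-point hand, and 70-point face models, while the normalisation scheme and confidence threshold are assumptions for the example.

```python
# Sketch: merge OpenPose body, hand, and face keypoints into one flat
# per-frame feature vector suitable for a pose-sequence encoder.
import numpy as np

BODY, HAND, FACE = 25, 21, 70  # keypoints per OpenPose model

def frame_features(body, left_hand, right_hand, face, width, height):
    """Concatenate the first detected person's keypoints into one vector.

    Each argument is an OpenPose array of shape (people, points, 3) holding
    x, y, confidence. Low-confidence points are zeroed and coordinates are
    scaled to [0, 1] by the frame size.
    """
    parts = []
    for arr, expected in ((body, BODY), (left_hand, HAND),
                          (right_hand, HAND), (face, FACE)):
        if arr is None or len(arr) == 0:          # nothing detected this frame
            parts.append(np.zeros((expected, 2)))
            continue
        pts = np.asarray(arr)[0]                  # first person only
        xy = pts[:, :2] / np.array([width, height])
        xy[pts[:, 2] < 0.1] = 0.0                 # drop low-confidence points
        parts.append(xy)
    return np.concatenate(parts).ravel()          # shape: (2 * 137,) = (274,)
```

Stacking these vectors over a clip's frames yields the pose sequences that the generative model is trained against.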
We are looking for datasets.
We are currently seeking datasets of recorded sign language statements. Ideal datasets include films and programs from various television producers' archives and NGO studies. We estimate that about 1 million minutes of sign language content are needed for each sign language to enhance our model's accuracy and inclusiveness.
We invite you to collaborate