Posted by Rebecca Franks – Developer Relations Engineer, Tracy Agyemang – Product Marketer, and Avneet Singh – Product Manager
Androidify is our new app that lets you build your very own Android bot, using a selfie and AI. We walked you through some of the components earlier this year, and starting today it’s available on the web or as an app on Google Play. In the new Androidify, you can upload a selfie or write a prompt describing what you’re looking for, add some accessories, and watch as AI builds your unique bot. Once you’ve had a chance to try it, come back here to learn more about the AI APIs and Android tools we used to create the app. Let’s dive in!
Key technical integrations
The Androidify app combines powerful technologies to deliver a seamless and engaging user experience. Here’s a breakdown of the core components and their roles:
AI with Gemini and Firebase
Androidify leverages the Firebase AI Logic SDK to access Google’s powerful Gemini and Imagen* models, which power several of the app’s key features.
The Androidify app also has a “Help me write” feature, which uses Gemini 2.5 Flash to create a random description for a bot’s clothing and hairstyle, adding a bit of a fun “I’m feeling lucky” element.
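As a rough illustration, a “Help me write” style call can be made through the Firebase AI Logic (firebase-ai) Kotlin SDK as sketched below; the function name and prompt text are illustrative assumptions, not the app’s actual code.

import com.google.firebase.Firebase
import com.google.firebase.ai.ai
import com.google.firebase.ai.type.GenerativeBackend

// Minimal sketch: ask Gemini 2.5 Flash for a playful bot description.
suspend fun generateBotDescription(): String? {
    val model = Firebase.ai(backend = GenerativeBackend.googleAI())
        .generativeModel(modelName = "gemini-2.5-flash")
    // The prompt here is a stand-in for whatever instruction the feature uses.
    val response = model.generateContent(
        "Write a short, fun description of an Android bot's clothing and hairstyle."
    )
    return response.text
}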

UI with Jetpack Compose and CameraX
The app’s user interface is built entirely with Jetpack Compose, enabling a declarative and responsive design across form factors. The app uses the latest Material 3 Expressive design, which provides delightful and engaging UI elements like new shapes, motion schemes, and custom animations.
For camera functionality, CameraX is used in conjunction with the ML Kit Pose Detection API. This intelligent integration allows the app to automatically detect when a person is in the camera’s view, enabling the capture button and adding visual guides for the user. It also makes the app’s camera features responsive to different device types, including foldables in tabletop mode.
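One way to wire this up is to feed CameraX analysis frames into the ML Kit pose detector and report whether a person is in view, as in the sketch below; the class and callback names are illustrative assumptions, not the app’s actual implementation.

import androidx.annotation.OptIn
import androidx.camera.core.ExperimentalGetImage
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.pose.PoseDetection
import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions

// Minimal sketch of a CameraX analyzer that uses ML Kit Pose Detection
// to decide whether a person is in frame (e.g. to enable the capture button).
class PersonDetectionAnalyzer(
    private val onPersonDetected: (Boolean) -> Unit,
) : ImageAnalysis.Analyzer {

    private val poseDetector = PoseDetection.getClient(
        PoseDetectorOptions.Builder()
            .setDetectorMode(PoseDetectorOptions.STREAM_MODE) // live camera feed
            .build()
    )

    @OptIn(ExperimentalGetImage::class)
    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image ?: run { imageProxy.close(); return }
        val input = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
        poseDetector.process(input)
            // Treat the person as "in view" when any pose landmarks are found.
            .addOnSuccessListener { pose -> onPersonDetected(pose.allPoseLandmarks.isNotEmpty()) }
            .addOnCompleteListener { imageProxy.close() }
    }
}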
Androidify also makes extensive use of the latest Compose features, such as:

Latest updates
In the latest version of Androidify, we’ve added some powerful new AI-driven features.
Background vibe generation with Gemini image editing
Using the latest Gemini 2.5 Flash Image model, we combine the Android bot with a preset background “vibe” to bring the Android bots to life.

This is achieved by using Firebase AI Logic – passing a prompt for the background vibe, and the input image bitmap of the bot, with instructions to Gemini on how to blend the two together.
override suspend fun generateImageWithEdit(
    image: Bitmap,
    backgroundPrompt: String = "Add the input image android bot as the main subject to the result... with the background that has the following vibe...",
): Bitmap {
    val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
        modelName = "gemini-2.5-flash-image-preview",
        generationConfig = generationConfig {
            responseModalities = listOf(
                ResponseModality.TEXT,
                ResponseModality.IMAGE,
            )
        },
    )
    // We combine the backgroundPrompt with the input image (the Android bot)
    // to produce the new bot with a background.
    val prompt = content {
        text(backgroundPrompt)
        image(image)
    }
    val response = model.generateContent(prompt)
    val generatedImage = response.candidates.firstOrNull()
        ?.content?.parts?.firstNotNullOfOrNull { it.asImageOrNull() }
    return generatedImage ?: throw IllegalStateException("Could not extract image from model response")
}
Sticker mode with ML Kit Subject Segmentation
The app also includes a “Sticker mode” option, which integrates the ML Kit Subject Segmentation library to remove the background from the bot. You can use “Sticker mode” in apps that support stickers.

The code for the sticker implementation first checks whether the Subject Segmentation model has been downloaded and installed; if it has not, it requests the download and waits for it to complete. Once the model is installed, the app passes the original Android bot image into the segmenter and calls process on it to remove the background. The foregroundBitmap object is then returned for exporting.
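As a rough sketch of that flow, the snippet below uses the public ModuleInstall and ML Kit Subject Segmentation APIs to check for the model, request it if needed, and return the foreground bitmap; the class and function names are illustrative assumptions rather than the actual LocalSegmentationDataSource code.

import android.content.Context
import android.graphics.Bitmap
import com.google.android.gms.common.moduleinstall.ModuleInstall
import com.google.android.gms.common.moduleinstall.ModuleInstallRequest
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.segmentation.subject.SubjectSegmentation
import com.google.mlkit.vision.segmentation.subject.SubjectSegmenterOptions
import kotlinx.coroutines.tasks.await

// Minimal sketch of the sticker flow described above.
class StickerSegmenter(private val context: Context) {

    private val segmenter = SubjectSegmentation.getClient(
        SubjectSegmenterOptions.Builder()
            .enableForegroundBitmap() // ask ML Kit to return the cut-out foreground
            .build()
    )

    suspend fun createSticker(botBitmap: Bitmap): Bitmap? {
        // Check whether the segmentation model is installed; request it if not.
        val moduleInstallClient = ModuleInstall.getClient(context)
        val availability = moduleInstallClient.areModulesAvailable(segmenter).await()
        if (!availability.areModulesAvailable()) {
            val request = ModuleInstallRequest.newBuilder().addApi(segmenter).build()
            // A production app would also listen for install progress before processing.
            moduleInstallClient.installModules(request).await()
        }
        // Run segmentation on the original bot image and return the foreground
        // bitmap with the background removed.
        val result = segmenter.process(InputImage.fromBitmap(botBitmap, 0)).await()
        return result.foregroundBitmap
    }
}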
See the LocalSegmentationDataSource for the full source implementation.
Learn more
To learn more about Androidify behind the scenes, take a look at the new features walkthrough, check out the code, or try the experience for yourself at androidify.com or download the app on Google Play.

*Check responses. Compatibility and availability varies. 18+.
