Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hi everyone.
Thank you for joining me today.
My name is Dmitry Nikitin and I am the client branding team leader at QuadQuad.
Today we are going to talk about a topic that's essential for
businesses working with B2B clients:
White Label Android apps, their development,
customization, and maintenance.
I will share insights, challenges, and lessons learned from my experience
working with White Label solutions.
Let's dive in.
What is White Label and who will find this talk useful?
White Label refers to a business model where a single codebase serves
multiple products, each customized and branded for different clients.
Instead of developing an app from scratch, businesses use ready-made
solutions that can be tailored to their needs, allowing for fast
deployment and cost efficiency.
This approach enables various companies to offer unique looking
applications while relying on the same underlying technologies and ensuring
scalability and easy maintenance.
This talk is for mobile developers, tech leaders, and product managers working
with multi-brand or multi-product apps.
Whether you're building white label solutions, internal business apps,
or multiple products from a single codebase, you will find useful insights.
Today we will cover how to structure code for flexibility and scalability,
how a design system keeps applications diverse yet consistent,
enabling separate deployments for different products,
efficiently testing customized builds at scale,
and streamlining delivery to the Google Play Store.
Each of these topics is quite extensive on its own and could
be a topic for a separate talk.
I will try to convey the key ideas and challenges we faced, so that
you can avoid them in the future.
Let's start with how to fit unrelated features,
some of which may even contradict each other, into a single project.
Probably no one would want to use an app that looks like this.
What's even more concerning is that the codebase behind such an app
is likely to be in an even worse state,
making maintenance and improvements a nightmare.
We will try to make it better.
Maybe even better than what's on the slide, but more structured and unified.
Just like with any other architecture-related question, unfortunately,
there is no silver bullet or magic checkbox in Android Studio.
It all depends on how well the project's architecture supports such changes.
One of the widely accepted approaches in Android development
is multi-module projects.
Over the past few years, many technical talks have been dedicated to how to
structure them properly, how to scale them, and how to maintain multiple
build systems when Gradle can no longer handle the sync phase efficiently.
Here I want to highlight that this architecture allows for relatively
independent changes; we can swap out modules for different
applications, even at runtime.
However, for this to work, modules need to have a unified entry point.
This could be an existing framework like Decompose by Arkadii Ivanov or RIBs
by Uber, or your own custom solution.
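To give a rough idea, here is a minimal Kotlin sketch of what such a unified entry point could look like; the interface and registry names are illustrative, not an actual framework API.

```kotlin
import android.content.Context
import android.content.Intent

// Hypothetical unified entry point: every feature module exposes one of these,
// and the app shell only ever talks to this interface.
interface FeatureEntryPoint {
    val featureId: String
    fun isAvailableFor(brand: String): Boolean
    fun createIntent(context: Context): Intent
}

// The shell collects entry points (for example via DI) and can swap features per brand.
class FeatureRegistry(entryPoints: List<FeatureEntryPoint>) {
    private val byId = entryPoints.associateBy { it.featureId }

    fun open(context: Context, featureId: String, brand: String) {
        val entry = byId[featureId] ?: return // feature not shipped in this build
        if (entry.isAvailableFor(brand)) {
            context.startActivity(entry.createIntent(context))
        }
    }
}
```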
After structuring the project, we should be able to enable additional features.
For example, Google apps.
Feature toggling helps us with this.
Features can be configured at build time or dynamically via toggles, the
same approach used for A/B testing.
On the one hand, highly customized changes for specific brands can
remain local to avoid cluttering the admin panel, increasing traffic,
or adding runtime uncertainty.
On the other hand, if a feature proves useful across brands, it
can be moved from a hard-coded config to a server-side configuration.
The key is to separate experimental and operational features in the
admin panel to ensure that outdated experiments are properly removed.
But despite all that, the implementation doesn't
need to distinguish feature types, making future modifications easier.
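As a rough sketch of that idea, assuming a hypothetical RemoteFlags source for the server-side configuration, the resolution could look something like this:

```kotlin
// Illustrative names; the point is the resolution order, not the exact API.
enum class Feature(val key: String, val localDefault: Boolean) {
    GOOGLE_PAY("google_pay", localDefault = false),
    SOCIAL_LOGIN("social_login", localDefault = true),
}

// Remote source (admin panel / A/B testing backend); it may know nothing
// about brand-local features.
interface RemoteFlags {
    fun valueFor(key: String): Boolean? // null = not configured remotely
}

class FeatureToggles(
    private val remote: RemoteFlags,
    private val brandOverrides: Map<Feature, Boolean> = emptyMap(), // hard-coded per brand
) {
    // Resolution order: brand-local override -> server-side value -> build-time default.
    // Callers never need to know which kind of toggle they are reading.
    fun isEnabled(feature: Feature): Boolean =
        brandOverrides[feature]
            ?: remote.valueFor(feature.key)
            ?: feature.localDefault
}
```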
Moving on, what about UI customization?
One of the solutions that helps us quickly and predictably adjust colors
is a palette management tool in the design system.
All app tokens can be divided into a fixed number of groups
based on their usage context.
These groups are generally universal and independent of specific business logic.
Examples include surface tokens for different Z-levels and states, text tokens
with varying levels of importance, and dedicated tokens for icons and borders.
Thanks to this context-based approach, we can predefine how each token should
behave to ensure proper contrast on different backgrounds.
In simple terms, we take the color value in HSL format, keep hue and saturation
unchanged, and adjust lightness until the element reaches the desired
level of visibility and contrast.
This allows us to seamlessly switch between light and dark themes and even
recolor the app into an entirely different scheme when needed.
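Here is a minimal Kotlin sketch of that lightness adjustment, using the standard WCAG contrast formula; the function names and the 4.5 target ratio are illustrative, not the exact values from our implementation.

```kotlin
import kotlin.math.abs

data class Hsl(val h: Float, val s: Float, val l: Float) // h in [0, 360), s and l in [0, 1]

// Convert HSL to sRGB components in [0, 1].
fun hslToRgb(c: Hsl): Triple<Float, Float, Float> {
    val chroma = (1 - abs(2 * c.l - 1)) * c.s
    val x = chroma * (1 - abs((c.h / 60f) % 2 - 1))
    val m = c.l - chroma / 2
    val (r, g, b) = when {
        c.h < 60 -> Triple(chroma, x, 0f)
        c.h < 120 -> Triple(x, chroma, 0f)
        c.h < 180 -> Triple(0f, chroma, x)
        c.h < 240 -> Triple(0f, x, chroma)
        c.h < 300 -> Triple(x, 0f, chroma)
        else -> Triple(chroma, 0f, x)
    }
    return Triple(r + m, g + m, b + m)
}

// WCAG relative luminance of a color.
fun luminance(c: Hsl): Double {
    fun channel(v: Float): Double {
        val d = v.toDouble()
        return if (d <= 0.03928) d / 12.92 else Math.pow((d + 0.055) / 1.055, 2.4)
    }
    val (r, g, b) = hslToRgb(c)
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)
}

// WCAG contrast ratio between two colors.
fun contrast(a: Hsl, b: Hsl): Double {
    val la = luminance(a)
    val lb = luminance(b)
    return (maxOf(la, lb) + 0.05) / (minOf(la, lb) + 0.05)
}

// Keep hue and saturation, nudge lightness away from the background
// until the target contrast is reached or lightness runs out of range.
fun adjustForContrast(color: Hsl, background: Hsl, target: Double = 4.5): Hsl {
    var candidate = color
    val step = if (luminance(background) > 0.5) -0.01f else 0.01f
    while (contrast(candidate, background) < target && candidate.l in 0.01f..0.99f) {
        candidate = candidate.copy(l = (candidate.l + step).coerceIn(0f, 1f))
    }
    return candidate
}
```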
At the same time, it's important that the application uses vector
graphics that reference the same tokens.
That way, the icons will also be recolored to the correct colors out of the box.
The algorithm is quite similar to what material components provide,
and some token names even match.
However, since the design system in our company spans Android,
iOS, desktop apps, and the website as well, the solution needs to be
fully cross-platform and unified.
But not all color customization can be handled solely through
the contextual palette.
Imagine that for the dark theme, a client wants a specific button to
have a completely different color.
The contextual palette knows nothing about individual buttons, so an
additional layer of abstraction is introduced: the component palette.
This means we define tokens for specific components, like the main
calendar button, which in turn still refer to the contextual palette
but may, for example, use an inverse variant, as shown in the illustration.
And the key to success here is that the unified approach allows us to
predictably generate the palette.
By integrating a vector preview of a core app screen
across all platforms into the admin panel where we configure colors,
we can immediately verify that the result looks as expected.
Meanwhile, under the hood, the system is already generating a ready-to-use palette
to assemble the entire application.
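A minimal sketch of that layering, with made-up token names (a real palette would have far more of them):

```kotlin
// Contextual tokens know nothing about specific components.
enum class ContextualToken { SurfacePrimary, TextPrimary, AccentPrimary, AccentInverse }

// A component token resolves to a contextual token, possibly a different one
// per theme (for example an inverse accent in the dark theme only).
data class ComponentToken(
    val light: ContextualToken,
    val dark: ContextualToken = light,
)

object ComponentPalette {
    val mainCalendarButtonBackground = ComponentToken(
        light = ContextualToken.AccentPrimary,
        dark = ContextualToken.AccentInverse, // the client-specific dark-theme override
    )
}

// Resolves a component token to a concrete ARGB color for the active theme.
class ThemeColors(private val values: Map<ContextualToken, Int>, private val isDark: Boolean) {
    fun resolve(token: ComponentToken): Int =
        values.getValue(if (isDark) token.dark else token.light)
}
```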
Next, we need to plan our app release.
But when?
There are many clients, and each comes with their own requests at random times.
It's great if they're patient enough and can wait even for minor changes and
fixes until the end of the next iteration, but more often than not,
that's not the case, and the iteration length becomes a significant delay.
At the same time, we can't release completely independently either.
Full regression testing, even with substantial automation,
can still take a lot of time.
And the more time we spend on regression, the fewer new features
we can deliver, unfortunately.
We found a solution to this issue by adopting a trunk-based approach and
on-demand releases from a stable version.
What is trunk-based development?
Unlike GitFlow and GitHub Flow, in trunk-based development changes
are merged into the main branch in small, frequent increments.
Of course, this comes with some requirements.
It demands extensive test coverage, which needs to be executed in CI
pipelines during merge requests.
However, in the long run, this approach helps avoid situations where we need
changes from two different branches, but aren't ready to merge them yet.
When we release a build, we tag the corresponding commit with
the appropriate version tag.
We have regular releases, where the application is fully tested, and on-demand
releases for specific brands, usually containing minor changes.
Here we branched off from tag 1.0.0, added the required modification,
and published version 1.0.1.
A critical point here: all changes must still go into trunk first, no
matter how tempting it might be to apply a fix directly to the release branch.
Doing so is shooting ourselves in the foot.
Otherwise, we risk forgetting to merge the fix back into the trunk, leading to
branch divergence and potential regressions.
And by not releasing directly from the latest trunk, we gain a key advantage:
the scope of unverified changes remains minimal.
This significantly speeds up impact analysis, allowing us to release
faster with higher confidence.
Testing white-label brands is an additional layer built on top
of the multi-stage testing already established for the main product.
Here, both comprehensive unit and integration test coverage are important,
along with expanding UI end-to-end scenarios, which is particularly
relevant for B2B cases.
From an automation perspective, adapting all existing tests for every new
brand would be quite challenging.
However, some limitations can be addressed.
For example, test users.
We shouldn't create them manually and hard-code credentials into the tests,
as this would require repeating the process for every new white-label app.
Instead, it's better to teach the tests to prepare a user in the
required state dynamically.
Let's make it happen.
It might require a dedicated test API to set up the user's state before
launching the app, with partially or fully completed KYC, deposited funds,
etc. One challenge, however, is that the initial data of the user may vary.
For instance, each brand might have different available registration countries.
Parameters like this can be extracted into a separate class,
allowing us to parameterize the test runs more efficiently.
Additionally, other brand-specific values may need to be accounted for,
such as the expected app name and UI elements like icons, animations, and more.
Such a configuration ensures that no crucial data is overlooked when adding
a new brand, even at compile time.
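For illustration, a compact Kotlin sketch of such a configuration; the brands and fields here are made up, but the point is that adding a new enum entry without all required values simply won't compile.

```kotlin
// Brand-specific test parameters collected in one place.
data class BrandTestConfig(
    val brandId: String,
    val expectedAppName: String,
    val registrationCountry: String,
    val hasGooglePay: Boolean,
)

// Adding a new brand without filling in every field fails at compile time.
enum class Brand(val config: BrandTestConfig) {
    ALPHA(BrandTestConfig("alpha", "Alpha Trader", "CY", hasGooglePay = true)),
    BETA(BrandTestConfig("beta", "Beta Invest", "AE", hasGooglePay = false)),
}

// Example of parameterizing a check with the active brand's config.
fun assertAppName(actualTitle: String, brand: Brand) {
    check(actualTitle == brand.config.expectedAppName) {
        "Expected ${brand.config.expectedAppName} but was $actualTitle"
    }
}
```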
At the same time, running the full test scope for every brand requires
huge resources, both for maintaining tests and running them in CI.
In most cases, these tests don't reveal issues, as core functionality usually
doesn't depend on a specific brand.
Our solution to address this is creating a separate scope
specifically for brand-related checks, aside from the mandatory end-to-end
checks without which we cannot release an app.
This includes app identity checks, ensuring the correct
branding elements are applied;
social authentication, as some apps may use different Google
and Facebook consoles;
push notifications, for the same reason; and terms and conditions and other
brand-dependent settings.
By analogy with the on-demand release practice, where we
perform impact analysis of changes relative to the verified tag, we
apply a similar approach here.
We fully test only the first application in a given release, and for the
subsequent ones we run a smoke brand scope to verify only the brand-specific aspects.
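One possible way to carve out such a brand scope is to tag the relevant tests with a custom annotation and filter on it in CI; the annotation name and test body below are illustrative, not our exact setup.

```kotlin
import org.junit.Test

// Marker for checks that must run for every brand build.
@Target(AnnotationTarget.CLASS, AnnotationTarget.FUNCTION)
annotation class BrandScope

class BrandIdentityTest {

    @BrandScope
    @Test
    fun appNameMatchesBrandConfig() {
        // Launch the branded build and compare visible identity elements
        // (app name, icon, etc.) against the brand's test configuration.
    }
}

// In CI, the smoke brand scope can then be selected with the AndroidX test
// runner's annotation filter, for example:
//   adb shell am instrument -w -e annotation com.example.BrandScope <test-package>/<runner>
```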
We have tested the application, now it's time to publish it.
In general, any routine task should be considered a candidate for automation.
And if you are dealing with multiple builds, automation
becomes even more essential.
The good news is that there are already CI plugins for that,
and you can scan the QR codes to see guides for Jenkins and GitLab CI,
and I bet there are similar solutions for other popular CI systems as well.
So let's go through how to do this on Jenkins.
Everything is simple: we call a predefined command and pass the build to it.
Key points: don't trust the command name.
It actually supports not only APKs but bundles as well.
We pass the obfuscation mapping file so that we can later
restore crash stack traces.
We specify the release channel.
Besides production, there are also beta and internal.
We can even define the rollout percentage and add release notes.
What about the first parameter on the slide?
These are credentials you need to obtain by generating a new
service account key for your project in the Google Cloud Console and
granting this account the appropriate permissions for uploading builds.
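If you prefer to keep the publishing step in the build itself rather than in a CI plugin, the third-party Gradle Play Publisher plugin (com.github.triplet.play) covers the same parameters; here is a minimal build.gradle.kts sketch, with the version, file name, and track treated as assumptions to check against the plugin docs for your setup.

```kotlin
// build.gradle.kts (app module) - a sketch, not a drop-in config.
plugins {
    id("com.android.application")
    id("com.github.triplet.play") version "3.9.1" // version is illustrative
}

play {
    // Service account JSON generated in Google Cloud Console and granted
    // upload permissions in the Play Console, as described above.
    serviceAccountCredentials.set(file("play-service-account.json"))

    // Release channel: internal, alpha, beta, or production.
    track.set("internal")

    // Upload App Bundles rather than APKs.
    defaultToAppBundles.set(true)
}
```

Publishing then becomes a single Gradle invocation (for example `./gradlew publishReleaseBundle`), which a CI job can run per brand flavor.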
What about verifying the publication?
Of course, you can use the internal testing track for it, but only if you
are already actively using this track.
If not, I recommend creating a separate test application in
the same Google Play Console.
There is an issue: once you upload a build to the internal or beta track,
Google may require continuous updates, even if the app exists only for
testing purposes.
For example, Google might force target SDK level increases,
permission removals, and other policy updates;
even pausing the track doesn't help.
It looks like a bug, but many developers have encountered these problems.
So if you don't plan to use the track long term, it's best not to enable it at all.
Let's sum it up before our brains reach full capacity.
We have discussed how modularization helps structure the codebase,
making it cleaner and easier to maintain.
We have addressed how a consistent design system brings clarity
and removes guesswork from UI decisions.
We have highlighted the importance of independent releases and how they
reduce delays for client specific needs.
Test automation ensures reliability and efficiency.
And publishing automation simplifies app development, making
the release smoother and faster.
Each of these points reflects how we streamlined the development and release
process for white label applications.
This is my cat, Ragnar.
I asked him to make my presentation less tangled, but he only
managed to create one slide,
claiming he is very busy with important cat business.
I don't want to judge him for that.
Thank you all for your time and attention.
I truly appreciate it and I hope you found this presentation useful.
Have a great day and see you next time.