Custom Prompts - Costs and Calculations
Custom prompts are a great feature within the Power Platform, but how much are they really costing us? In this post, I’ll detail how billing works and explain how you can accurately calculate the costs for your next project. This information is especially important when using custom prompting at scale, as understanding the costs correctly may directly influence your choice between AI Builder and Azure OpenAI services for your solution.
Key Takeaway: Understand and accurately calculate the cost of using AI Builder custom prompts within Microsoft Power Platform solutions.
Pre-requisites: Familiarity with AI Builder custom prompts/‘Create text with GPT using a prompt’, Power Automate, and the Power Platform Admin Center.
To begin with, it’s essential to understand two key concepts: Tokens and AI Builder Credits, as billing is based on the number of tokens that you process through your prompts.
Tokens & AI Builder Credits
What are Tokens?
You can think of tokens as the basic building blocks of language that large language models (LLMs), like GPT, use to process and generate text. A token can be as small as a single character or as large as a whole word, depending on the context. When you create a custom prompt within Power Platform, your input prompt (and the output it produces) are made up of these tokens.
A great way to familiarize yourself with tokens is to use OpenAI’s tokenizer tool, found here. Paste in your own block of text and the tool will show you how many tokens make it up. According to Microsoft, 700 words is roughly the equivalent of 1000 tokens.

OpenAI’s Tokenizer tool in action – the colours are highlighting how many tokens are within the input text. My example was made up of 53 tokens.
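If you’d rather count tokens in code than paste text into the web tool, OpenAI also publishes its tokenizer as the open-source tiktoken Python library. Here’s a minimal sketch; the assumption (mine, not Microsoft’s) is that AI Builder tokenizes text the same way as the underlying GPT models:

```python
# pip install tiktoken
import tiktoken

# o200k_base is the encoding used by GPT 4o; GPT 3.5 uses cl100k_base
encoding = tiktoken.get_encoding("o200k_base")

text = "What colour is the sky today?"
tokens = encoding.encode(text)
print(f"{len(tokens)} tokens")
```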
What are AI Builder Credits?
Credits within AI Builder can be thought of as a kind of currency that you spend to make the AI do things, like finding objects in an image or extracting text, and each feature uses up a certain number of these credits when you use it. Custom Prompts just so happen to be one of AI Builder’s features, so every time you use a custom prompt within your solution you are spending AI Builder credits.
Depending on what Microsoft licensing your organization holds, those licenses will entitle you to a certain number of AI Builder credits per user. Take the ‘Power Apps Premium’ license as an example: every user who holds this license is entitled to 500 credits. By default, however, the credits are pooled at tenant level rather than allocated back per user. To see how many credits you have available, and to monitor usage, head over to the Power Platform Admin Center and go to Resources > Capacity > Summary.

On the Summary page, I could see that I had 500 credits available and had already consumed 180.
For a breakdown of credit entitlement based on each Microsoft license, take a look here.
If you need to acquire more credits for the tenant, they are sold in ‘AI Builder Capacity Add-on’ packs. More details here.
Billing (Consumption Rates)
Now that we’ve got an understanding of Tokens and AI Builder Credits, we can start to look at the consumption rates for custom prompts.
Power Platform Licensing Guide
The Power Platform Licensing Guide, found here, is your monthly one-stop shop for all things licensing in the Power Platform (Microsoft nailed the name of the document, didn’t they?!). Within this guide, you’ll find an ‘AI Builder Rate Card’. The rate card (shown in the next section) shows how many credits are consumed each time we use one of the available GPT models within our custom prompts. At the time of writing, GPT 3.5 and GPT 4o are the two available models.
Tokens, Credits and Dollars
Here’s the important bit of ‘theory’ when it comes to billing: Tokens get converted to credits, and credits get converted to dollars. Taking GPT 4o as the example in the rate table below, you’ll see that for every 1000 tokens you put into your input prompt, you’ll consume 20 AI Builder credits. The same applies to your output, but it’s billed at a higher rate of 60 credits per 1000 tokens. Between input and output, at 1000 tokens apiece, you will have consumed 80 credits (that’s $0.04, based on 1000 tokens in and out).

The AI Builder Rate Card from November’s edition of the Power Platform Licensing Guide.
Rounding Up
Here’s the thing: A thousand tokens is quite a large chunk of text, and there are going to be plenty of scenarios where you use less than that. Unfortunately, the rate card example above isn’t just a nice, easy-to-digest example; it is in fact the minimum rate that you’ll be billed at. For every input prompt and its output, Microsoft rounds both token counts up to the next multiple of 1000.
For example, your input could be 50 tokens, 200 or 999 – you’ll still be consuming credits as if it were 1000 tokens. If you happen to creep over that 1000 token threshold, you’ve guessed it, you’ll be billed up to the next multiple – at 2000 tokens.
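To make the maths concrete, here’s a small Python sketch of the billing logic as I understand it, using the GPT 4o rates from the rate card and the $0.04-for-80-credits figure above. The rates (and my reading of the rounding rule) are worth verifying against the current licensing guide:

```python
import math

# Rates from the AI Builder Rate Card (GPT 4o, at the time of writing)
INPUT_CREDITS_PER_1K = 20
OUTPUT_CREDITS_PER_1K = 60
DOLLARS_PER_CREDIT = 0.0005  # derived from the $0.04-for-80-credits example above

def billed_credits(input_tokens: int, output_tokens: int) -> int:
    """Round each side up to the next multiple of 1000 tokens, then apply the rate card."""
    input_blocks = math.ceil(input_tokens / 1000)
    output_blocks = math.ceil(output_tokens / 1000)
    return input_blocks * INPUT_CREDITS_PER_1K + output_blocks * OUTPUT_CREDITS_PER_1K

# A 999 token input with a 1 token output is billed exactly like 1000 in and 1000 out
credits = billed_credits(999, 1)
print(credits, f"(${credits * DOLLARS_PER_CREDIT:.2f})")  # 80 ($0.04)
```

But wait, there’s more…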
The System Prompt
Let me now drag your gaze to the small print at the bottom of the rate card table. The key phrase I need to point out is “Input tokens include both customer and system prompts”. Hold on, what is a System Prompt?

A System Prompt is a form of meta-prompt that gets processed along with your input prompt. I’ve been poking Microsoft for more details on this mysterious meta-prompt; however, they’ve been tight-lipped on the contents. All I know so far is that it’s a supporting prompt (I am still seeking a better answer). Anyway, the important point is that you’re also billed for this meta-prompt, and based on my testing, it’s not tiny.
Testing the System Prompt
In the screenshot below, I created a deliberate 20 token input that returns a single word (and single token) output: I’m simply asking what colour the sky is…

Here in the UK, I was still expecting the model to return the word ‘Grey’.
I took my 20 token input and created a custom prompt with it, subsequently putting it to the test in Power Automate. Once my prompt had been used in a Cloud flow, I opened up the Flow results and found the following:

The results…
My 20 token input prompt (‘promptTokens’ within the screenshot) had become 998 tokens in size! An uplift of 978 tokens over my original input within my custom prompt. That’s one chunky System Prompt!
I previously ran the same process with a 50 token prompt; here are the input and the results:


The results again…
1028 tokens in size. Again, an uplift of 978 tokens. I continued to see the same result through further testing with prompts of other sizes.
Based on what you now know about the billing process, what starts out as a mere 50 token input prompt with a 1 token output will be billed as a 2000 token input with a 1000 token output. Ouch.
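To tie the two experiments together, here’s the same rounding sketch from earlier with my measured 978 token uplift baked in. Remember that 978 is my observed figure, not an official one:

```python
import math

SYSTEM_PROMPT_TOKENS = 978  # the uplift I measured; expect this figure to change over time

def billed_input_output(input_tokens: int, output_tokens: int) -> int:
    """GPT 4o rates: 20 credits per 1000 input tokens, 60 per 1000 output tokens."""
    billed_in = math.ceil((input_tokens + SYSTEM_PROMPT_TOKENS) / 1000) * 20
    billed_out = math.ceil(output_tokens / 1000) * 60
    return billed_in + billed_out

print(billed_input_output(50, 1))  # 50 + 978 = 1028 -> 2000 tokens: 40 + 60 = 100 credits
print(billed_input_output(20, 1))  # 20 + 978 = 998 -> 1000 tokens: 20 + 60 = 80 credits
```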
Results (Capacity Report)
To verify my results, I ran what’s called a ‘Capacity Report’ in the Power Platform Admin Center to understand how many AI Builder credits my two Cloud flows had consumed (as demonstrated in the screenshots above). Running a Capacity Report for AI Builder credits is quite straightforward: in the Power Platform Admin Center, head to the ‘Capacity’ tab, click the ‘Download Reports’ button, and select AI Builder on the following screen to download a consumption (CSV) report.


Here’s the results from my Capacity Report:

As expected, having run the 50 token Cloud flow first, you can see that I was billed 100 credits (‘AIConsumption’ column within the screenshot). As the System Prompt pushed my small 50 token input over the 1000 token threshold, I was charged 40 credits for my input (2000 tokens) and 60 for my output (1000 tokens).
I ran the 20 token Cloud flow second, and at 998 input tokens, my input was billed for just 1000 tokens: 20 credits for the input (1000 tokens) and 60 for the output (1000 tokens) – 80 credits overall.
Aha, predictable consumption!
Conclusion
Now that we understand how Microsoft rounds up token usage and factors in the System Prompt for our inputs, we can take this into consideration when pricing up custom prompts. By using OpenAI’s tokenizer tool, we can get an accurate token count for our input, then add the magical number of 978 to the total (again, at the time of writing). Whatever total you end up with, round it up to the next multiple of 1000 – and now you’ll know how many credits your input is going to consume and therefore cost. A final example: if my original input is 1500 tokens, I add the 978 tokens for the System Prompt to bring the total input to 2478 tokens, then round up to the next multiple of 1000 (that’s 3000), consuming 60 AI Builder credits for the input. Hopefully that’s making sense!
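As a sanity check, that final example drops straight into the billed_input_output sketch from earlier:

```python
# 1500 input tokens + 978 System Prompt tokens = 2478, rounded up to 3000
# 3 x 20 credits = 60 credits for the input alone (no output counted here)
print(billed_input_output(1500, 0))  # 60
```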
Disclaimer: I fully expect the number of tokens within the System Prompt to change; my understanding is that the prompt is revised multiple times a year, and the GPT model you choose is also a factor. Some testing up front, akin to my own examples, should identify the number of tokens within the System Prompt.
The outputs from our custom prompts are much less predictable and are fully dependent on the precision of the input. However, when creating your custom prompts, the UI lets you test your prompt, which will give a good indication of how much text you can expect to receive. Run that result text through the OpenAI tokenizer tool to get an idea of how many tokens to expect. Although the output is billed at a much higher rate, thankfully there’s no System Prompt to contend with.
Thanks for reading, and if there’s any mistakes or if you have any questions, just give me a nudge.