Why would a Firebase Cloud Function be much slower once deployed than in the emulator, other than cold starts?

Issue

I have a Firebase Cloud Function that parses some HTML among other things. I’ve measured the HTML-parsing method itself, and it takes about 300ms in the emulator on a MacBook Pro. It seems to take about 4-8 seconds when deployed to Firebase, which is not a tolerable duration for my application.

I am using Dart to write my Firebase Cloud Functions using firebase_functions_interop and compiling to JS with Dart2JS. The method that is performing drastically differently between environments is actually Dart’s built-in HTML parsing method. I know cold starts are a common reason for cloud functions taking longer than expected, but it seems like that’s not the culprit here, since I’m explicitly measuring the time it takes to parse the HTML, and that’s where I’m seeing the huge difference.
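The original measurement code isn't shown, but a minimal sketch of that kind of isolated timing, assuming the parsing is done with `parse` from `package:html` (the usual HTML parser in Dart), might look like this:

```dart
import 'package:html/parser.dart' show parse;

void main() {
  const htmlSource = '<html><body><p>Hello</p></body></html>';

  // Time only the parse call itself, so cold-start and network
  // overhead are excluded from the measurement.
  final stopwatch = Stopwatch()..start();
  final document = parse(htmlSource);
  stopwatch.stop();

  print("Parsed in ${stopwatch.elapsedMilliseconds}ms; "
      "first <p>: ${document.querySelector('p')?.text}");
}
```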

Is it to be expected that a Cloud Function would run drastically slower in production than on a MacBook Pro?

Solution

When you deploy a Firebase Cloud Function, you can customize the runtime environment, but there is no setting for CPU capacity directly. If you look at the Cloud Functions documentation (Firebase Cloud Functions are backed by Google Cloud Functions), you can see the correlation between the quantity of memory and the CPU power.
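At the time of this answer, the 1st-gen tiers paired memory and CPU roughly like this (check the current Cloud Functions documentation, as these values change):

- 128MB → 200MHz
- 256MB → 400MHz
- 512MB → 800MHz
- 1GB → 1.4GHz
- 2GB → 2.4GHz
- 4GB → 4.8GHz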

Note: 4GB of memory = 2 CPUs @ 2.4GHz, not 1 CPU (one thread) @ 4.8GHz.

So, deploy your Firebase Cloud Function with 2GB of memory and the speed should improve (the default is 256MB). But keep in mind: at that tier a Cloud Function runs on a single CPU limited to 2.4GHz, while your MacBook Pro likely has 8 or 16 logical cores running at around 3.4GHz.
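For example, assuming a version of firebase_functions_interop that exposes `runWith` and `RuntimeOptions` (mirroring the Node.js SDK's `functions.runWith`), the memory allocation could be set in code like this:

```dart
import 'package:firebase_functions_interop/firebase_functions_interop.dart';

void main() {
  // Request a 2GB instance instead of the 256MB default; on 1st-gen
  // Cloud Functions, more memory also means a faster CPU allocation.
  functions['parseHtml'] = functions
      .runWith(RuntimeOptions(memory: '2GB', timeoutSeconds: 60))
      .https
      .onRequest(parseHtml);
}

// Hypothetical handler standing in for the HTML-parsing function.
void parseHtml(ExpressHttpRequest request) {
  // ... fetch and parse the HTML here ...
  request.response
    ..write('done')
    ..close();
}
```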

Answered By – guillaume blaquiere

Answer Checked By – Dawn Plyler (FlutterFixes Volunteer)
