Face API - v1.0-preview

This API is currently available in:

  • Australia East - australiaeast.api.cognitive.microsoft.com
  • Brazil South - brazilsouth.api.cognitive.microsoft.com
  • Canada Central - canadacentral.api.cognitive.microsoft.com
  • Central India - centralindia.api.cognitive.microsoft.com
  • Central US - centralus.api.cognitive.microsoft.com
  • East Asia - eastasia.api.cognitive.microsoft.com
  • East US - eastus.api.cognitive.microsoft.com
  • East US 2 - eastus2.api.cognitive.microsoft.com
  • France Central - francecentral.api.cognitive.microsoft.com
  • Germany West Central - germanywestcentral.api.cognitive.microsoft.com
  • Italy North - italynorth.api.cognitive.microsoft.com
  • Japan East - japaneast.api.cognitive.microsoft.com
  • Japan West - japanwest.api.cognitive.microsoft.com
  • Jio India West - jioindiawest.api.cognitive.microsoft.com
  • Korea Central - koreacentral.api.cognitive.microsoft.com
  • North Central US - northcentralus.api.cognitive.microsoft.com
  • North Europe - northeurope.api.cognitive.microsoft.com
  • Norway East - norwayeast.api.cognitive.microsoft.com
  • Qatar Central - qatarcentral.api.cognitive.microsoft.com
  • South Africa North - southafricanorth.api.cognitive.microsoft.com
  • South Central US - southcentralus.api.cognitive.microsoft.com
  • Southeast Asia - southeastasia.api.cognitive.microsoft.com
  • Sweden Central - swedencentral.api.cognitive.microsoft.com
  • Switzerland North - switzerlandnorth.api.cognitive.microsoft.com
  • Switzerland West - switzerlandwest.api.cognitive.microsoft.com
  • UAE North - uaenorth.api.cognitive.microsoft.com
  • UK South - uksouth.api.cognitive.microsoft.com
  • West Central US - westcentralus.api.cognitive.microsoft.com
  • West Europe - westeurope.api.cognitive.microsoft.com
  • West US - westus.api.cognitive.microsoft.com
  • West US 2 - westus2.api.cognitive.microsoft.com
  • West US 3 - westus3.api.cognitive.microsoft.com

Face - Detect

To mitigate potential misuse that can subject people to stereotyping, discrimination, or unfair denial of services, we are retiring Face API attributes that predict emotion, gender, age, smile, facial hair, hair, and makeup. Read more about this decision here. We will also retire the Snapshot API, which allowed biometric data transfer from one Face subscription to another. Existing customers have until 30 June 2023 to use the emotion, gender, age, smile, facial hair, hair, and makeup attributes and the Snapshot API through Face API before they are retired.

Detect human faces in an image and return face rectangles, optionally with faceIds, landmarks, and attributes.

  • No image will be stored. Only the extracted face feature(s) will be stored on the server. The faceId is an identifier of the face feature, used in Face - Identify, Face - Verify, and Face - Find Similar. The stored face features expire and are deleted at the time specified by faceIdTimeToLive after the original detection call.
  • Optional parameters include faceId, landmarks, and attributes. Attributes include headPose, glasses, occlusion, accessories, blur, exposure, noise, mask, and qualityForRecognition. Some of the results returned for specific attributes may not be highly accurate.
  • JPEG, PNG, GIF (the first frame), and BMP formats are supported. The allowed image file size is from 1KB to 6MB.
  • The minimum detectable face size is 36x36 pixels in an image no larger than 1920x1080 pixels. Images with dimensions higher than 1920x1080 pixels will need a proportionally larger minimum face size.
  • Up to 100 faces can be returned for an image. Faces are ranked by face rectangle size from large to small.
  • For optimal results when querying Face - Identify, Face - Verify, and Face - Find Similar ('returnFaceId' is true), use faces that are frontal, clear, and at least 200x200 pixels (100 pixels between eyes).
  • Different 'detectionModel' values can be provided. To use and compare different detection models, please refer to How to specify a detection model.
    • 'detection_01': The default detection model for Face - Detect. Recommended for near-frontal face detection. Faces at exceptionally large head-pose angles, occluded faces, or faces in incorrectly oriented images may not be detected.
    • 'detection_02': Detection model released in May 2019, with improved accuracy especially on small, side-view, and blurry faces. Face attributes and landmarks are disabled if you choose this detection model.
    • 'detection_03': Detection model released in February 2021, with improved accuracy especially on small faces. Face attributes (mask and headPose only) and landmarks are supported if you choose this detection model.
  • Different 'recognitionModel' values are provided. If follow-up operations like Verify, Identify, or Find Similar are needed, specify the recognition model with the 'recognitionModel' parameter. The default value is 'recognition_01'; if the latest model is needed, explicitly specify the model in this parameter. Once specified, the detected faceIds will be associated with the specified recognition model. For more details, please refer to How to specify a recognition model.
    • 'recognition_01': The default recognition model for Face - Detect. All faceIds created before March 2019 are bound to this recognition model.
    • 'recognition_02': Recognition model released in March 2019.
    • 'recognition_03': Recognition model released in May 2020.
    • 'recognition_04': Recognition model released in February 2021. 'recognition_04' is recommended: its accuracy on faces wearing masks is improved over 'recognition_03', and its overall accuracy is improved over 'recognition_01' and 'recognition_02'.
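The minimum face size rule above (36x36 pixels within a 1920x1080 image, proportionally larger beyond that) can be sketched as a client-side check. Note the scaling formula below interprets "proportionally larger" as scaling 36 px by the factor the image exceeds 1920x1080; that interpretation is an assumption, not a documented formula.

```python
# Sketch of the minimum detectable face size rule described above.
# The proportional-scaling interpretation is an assumption.

def min_detectable_face_px(width, height):
    """Approximate minimum face side length (pixels) for an image of the
    given dimensions: 36 px up to 1920x1080, scaled up beyond that."""
    scale = max(width / 1920, height / 1080, 1.0)
    return round(36 * scale)
```

For example, a 4K (3840x2160) image doubles both limits, so faces smaller than roughly 72x72 pixels may go undetected.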

HTTP Method

POST


Request URL

https://{endpoint}/face/v1.0-preview/detect[?returnFaceId][&returnFaceLandmarks][&returnFaceAttributes][&recognitionModel][&returnRecognitionModel][&detectionModel][&faceIdTimeToLive]

Request parameters

returnFaceId (optional)
boolean

Return faceIds of the detected faces or not. The default value is true.

returnFaceLandmarks (optional)
boolean

Return face landmarks of the detected faces or not. The default value is false.

returnFaceAttributes (optional)
string

Analyze and return one or more specified face attributes as a comma-separated string, for example "returnFaceAttributes=headPose,glasses". Supported face attributes include headPose, glasses, occlusion, accessories, blur, exposure, noise, mask, and qualityForRecognition. Face attribute analysis has additional computational and time cost.

recognitionModel (optional)
string

The 'recognitionModel' associated with the detected faceIds. Supported 'recognitionModel' values are "recognition_01", "recognition_02", "recognition_03", and "recognition_04". The default value is "recognition_01". "recognition_04" is recommended: its accuracy on faces wearing masks is improved over "recognition_03", and its overall accuracy is improved over "recognition_01" and "recognition_02".

returnRecognitionModel (optional)
boolean

Return 'recognitionModel' or not. The default value is false.

detectionModel (optional)
string

The 'detectionModel' associated with the detected faceIds. Supported 'detectionModel' values are "detection_01", "detection_02", and "detection_03". The default value is "detection_01".

faceIdTimeToLive (optional)
integer

The number of seconds the faceId is cached. The supported range is 60 to 86400 seconds. The default value is 86400 (24 hours).
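The defaults and the faceIdTimeToLive range above can be captured in a small client-side helper. This is a sketch; `detect_params` is a hypothetical helper name, and the parameter names come from the table above.

```python
# Documented defaults for Face - Detect query parameters.
DEFAULTS = {
    "returnFaceId": True,
    "returnFaceLandmarks": False,
    "recognitionModel": "recognition_01",
    "returnRecognitionModel": False,
    "detectionModel": "detection_01",
    "faceIdTimeToLive": 86400,  # 24 hours
}

def detect_params(**overrides):
    """Merge caller overrides onto the documented defaults, validating the
    faceIdTimeToLive range (60-86400 seconds) before sending anything."""
    params = dict(DEFAULTS, **overrides)
    ttl = params["faceIdTimeToLive"]
    if not 60 <= ttl <= 86400:
        raise ValueError("faceIdTimeToLive must be 60-86400 seconds")
    return params
```

Validating the range locally avoids a round trip that would end in a BadArgument response.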

Request headers

Content-Type
string
Media type of the body sent to the API.

Ocp-Apim-Subscription-Key
string
Subscription key which provides access to this API. Found in your Cognitive Services accounts.

Request body

Detects faces in an image specified either by URL or as binary data.

JSON fields in the request body:
Fields Type Description
url String URL of the input image.

{
    "url": "http://example.com/1.jpg"
}
[binary data]
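The two body forms above pair with different Content-Type headers (see also Response 415). A minimal sketch of preparing either body; `detect_body` is a hypothetical helper name:

```python
import json

def detect_body(image):
    """Return (content_type, body bytes) for Face - Detect.

    Pass a URL string for the JSON form, or raw image bytes for the
    binary form (JPEG, PNG, GIF first frame, or BMP; 1KB-6MB).
    """
    if isinstance(image, bytes):
        # Binary upload: enforce the documented 1KB-6MB file size limits.
        if not 1024 <= len(image) <= 6 * 1024 * 1024:
            raise ValueError("image must be 1KB-6MB")
        return "application/octet-stream", image
    # URL form: a JSON object with a single "url" field.
    return "application/json", json.dumps({"url": image}).encode("utf-8")
```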

Response 200

A successful call returns an array of face entries ranked by face rectangle size in descending order. An empty response indicates no faces detected. A face entry may contain the following values depending on input parameters:

Fields Type Description
faceId String Unique faceId of the detected face, created by the detection API. It expires at the time specified by faceIdTimeToLive (24 hours by default) after the detection call. Returned only when the "returnFaceId" parameter is true.
recognitionModel String The 'recognitionModel' associated with this faceId. Only returned when 'returnRecognitionModel' is explicitly set to true.
faceRectangle Object A rectangle area for the face location in the image.
faceLandmarks Object 27-point face landmarks pointing to the important positions of face components. Returned only when the "returnFaceLandmarks" parameter is true.
faceAttributes Object Face attributes:
  • headPose: 3-D roll/yaw/pitch angles for face direction.
  • glasses: glasses type. Values include 'NoGlasses', 'ReadingGlasses', 'Sunglasses', and 'SwimmingGoggles'.
  • accessories: accessories around the face, including 'headwear', 'glasses', and 'mask'. An empty array means no accessories were detected. Note this applies only after a face is detected; a large mask could result in no face being detected.
  • blur: whether the face is blurry. Level returns 'Low', 'Medium', or 'High'. Value returns a number in [0,1]; the larger, the blurrier.
  • exposure: face exposure level. Level returns 'GoodExposure', 'OverExposure', or 'UnderExposure'.
  • noise: noise level of face pixels. Level returns 'Low', 'Medium', or 'High'. Value returns a number in [0,1]; the larger, the noisier.
  • occlusion: whether each facial area is occluded, including forehead, eyes, and mouth.
  • mask: whether the face is wearing a mask. Mask type returns 'noMask', 'faceMask', 'otherMaskOrOcclusion', or 'uncertain'. Value returns a boolean 'noseAndMouthCovered' indicating whether the nose and mouth are covered.
  • qualityForRecognition: whether the image used in detection is of sufficient quality to attempt face recognition. The value is an informal rating of low, medium, or high. Only "high" quality images are recommended for person enrollment, and quality at or above "medium" is recommended for identification scenarios. The attribute is only available when using detection model detection_01 or detection_03 combined with recognition model recognition_03 or recognition_04.

[
    {
        "faceId": "c5c24a82-6845-4031-9d5d-978df9175426",
        "recognitionModel": "recognition_02",
        "faceRectangle": {
            "width": 78,
            "height": 78,
            "left": 394,
            "top": 54
        },
        "faceLandmarks": {
            "pupilLeft": {
                "x": 412.7,
                "y": 78.4
            },
            "pupilRight": {
                "x": 446.8,
                "y": 74.2
            },
            "noseTip": {
                "x": 437.7,
                "y": 92.4
            },
            "mouthLeft": {
                "x": 417.8,
                "y": 114.4
            },
            "mouthRight": {
                "x": 451.3,
                "y": 109.3
            },
            "eyebrowLeftOuter": {
                "x": 397.9,
                "y": 78.5
            },
            "eyebrowLeftInner": {
                "x": 425.4,
                "y": 70.5
            },
            "eyeLeftOuter": {
                "x": 406.7,
                "y": 80.6
            },
            "eyeLeftTop": {
                "x": 412.2,
                "y": 76.2
            },
            "eyeLeftBottom": {
                "x": 413.0,
                "y": 80.1
            },
            "eyeLeftInner": {
                "x": 418.9,
                "y": 78.0
            },
            "eyebrowRightInner": {
                "x": 4.8,
                "y": 69.7
            },
            "eyebrowRightOuter": {
                "x": 5.5,
                "y": 68.5
            },
            "eyeRightInner": {
                "x": 441.5,
                "y": 75.0
            },
            "eyeRightTop": {
                "x": 446.4,
                "y": 71.7
            },
            "eyeRightBottom": {
                "x": 447.0,
                "y": 75.3
            },
            "eyeRightOuter": {
                "x": 451.7,
                "y": 73.4
            },
            "noseRootLeft": {
                "x": 428.0,
                "y": 77.1
            },
            "noseRootRight": {
                "x": 435.8,
                "y": 75.6
            },
            "noseLeftAlarTop": {
                "x": 428.3,
                "y": 89.7
            },
            "noseRightAlarTop": {
                "x": 442.2,
                "y": 87.0
            },
            "noseLeftAlarOutTip": {
                "x": 424.3,
                "y": 96.4
            },
            "noseRightAlarOutTip": {
                "x": 446.6,
                "y": 92.5
            },
            "upperLipTop": {
                "x": 437.6,
                "y": 105.9
            },
            "upperLipBottom": {
                "x": 437.6,
                "y": 108.2
            },
            "underLipTop": {
                "x": 436.8,
                "y": 111.4
            },
            "underLipBottom": {
                "x": 437.3,
                "y": 114.5
            }
        },
        "faceAttributes": {
            "glasses": "Sunglasses",
            "headPose": {
                "roll": 2.1,
                "yaw": 3,
                "pitch": 1.6
            },
            "occlusion": {
                "foreheadOccluded": false,
                "eyeOccluded": false,
                "mouthOccluded": false
            },
            "accessories": [
                {"type": "headWear", "confidence": 0.99},
                {"type": "glasses", "confidence": 1.0},
                {"type": "mask", "confidence": 0.87}
            ],
            "blur": {
                "blurLevel": "Medium",
                "value": 0.51
            },
            "exposure": {
                "exposureLevel": "GoodExposure",
                "value": 0.55
            },
            "noise": {
                "noiseLevel": "Low",
                "value": 0.12
            },
            "qualityForRecognition": "high"
        }
    }
]
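A response like the one above is usually reduced to the faceIds and rectangles before further calls. A minimal parsing sketch; `summarize_faces` is a hypothetical helper name:

```python
# Sketch of pulling the most commonly used fields from a Face - Detect
# response (a list of face entries, as in the example above).

def summarize_faces(faces):
    """Return (faceId, (left, top, width, height)) per detected face,
    preserving the service's large-to-small rectangle ordering."""
    out = []
    for face in faces:
        r = face["faceRectangle"]
        # faceId is absent when returnFaceId=false, hence .get().
        out.append((face.get("faceId"),
                    (r["left"], r["top"], r["width"], r["height"])))
    return out
```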

Response 400

Error code and message returned in JSON:

Error Code Error Message Description
BadArgument JSON parsing error. Bad or unrecognizable request JSON body.
BadArgument Invalid argument returnFaceAttributes. Supported values are: headPose, glasses, occlusion, accessories, blur, exposure, noise, and mask in a comma-separated format.
BadArgument 'recognitionModel' is invalid.
BadArgument 'detectionModel' is invalid.
BadArgument 'returnFaceAttributes' is not supported by detection_02.
BadArgument 'returnLandmarks' is not supported by detection_02.
InvalidURL Invalid image format or URL. Supported formats include JPEG, PNG, GIF (the first frame), and BMP.
InvalidURL Failed to download image from the specified URL. Remote server error returned.
InvalidImage Decoding error, image format unsupported.
InvalidImageSize Image size is too small. The valid image file size should be larger than or equal to 1KB.
InvalidImageSize Image size is too big. The valid image file size should be no larger than 6MB.

{
    "error": {
        "code": "BadArgument",
        "message": "Request body is invalid."
    }
}

Response 401

Error code and message returned in JSON:

Error Code Error Message Description
Unspecified Invalid subscription key or user/plan is blocked.

{
    "error": {
        "code": "Unspecified",
        "message": "Access denied due to invalid subscription key. Make sure you are subscribed to an API you are trying to call and provide the right key."
    }
}

Response 403

{
    "error": {
        "statusCode": 403,
        "message": "Out of call volume quota. Quota will be replenished in 2 days."
    }
}

Response 408

Operation exceeds maximum execution time.

{
    "error": {
        "code": "OperationTimeOut",
        "message": "Request Timeout."
    }
}

Response 415

Unsupported media type error. Content-Type is not in the allowed types:

  1. For an image URL, Content-Type should be application/json
  2. For a local image, Content-Type should be application/octet-stream

{
    "error": {
        "code": "BadArgument",
        "message": "Invalid Media Type."
    }
}

Response 429

{
    "error": {
       "statusCode": 429,
        "message": "Rate limit is exceeded. Try again in 26 seconds."
    }
}
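The 429 message quotes the advised wait in seconds. A sketch that extracts it so a client can back off before retrying; `retry_after_seconds` is a hypothetical helper, since the service's exact retry guidance may vary:

```python
import re

def retry_after_seconds(message, default=1):
    """Extract the advised wait from a 429 message like
    'Rate limit is exceeded. Try again in 26 seconds.'
    Falls back to a default when no hint is present."""
    m = re.search(r"(\d+)\s+seconds?", message)
    return int(m.group(1)) if m else default
```

A caller would typically sleep for the returned number of seconds and retry the request once.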

Code samples

@ECHO OFF

curl -v -X POST "https://switzerlandwest.api.cognitive.microsoft.com/face/v1.0-preview/detect?returnFaceId=true&returnFaceLandmarks=false&returnFaceAttributes={string}&recognitionModel=recognition_03&returnRecognitionModel=false&detectionModel=detection_03&faceIdTimeToLive=86400" ^
-H "Content-Type: application/json" ^
-H "Ocp-Apim-Subscription-Key: {subscription key}" ^
--data-ascii "{body}"
using System;
using System.Net.Http.Headers;
using System.Text;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web;

namespace CSHttpClientSample
{
    static class Program
    {
        static async Task Main()
        {
            await MakeRequest();
            Console.WriteLine("Hit ENTER to exit...");
            Console.ReadLine();
        }

        static async Task MakeRequest()
        {
            var client = new HttpClient();
            var queryString = HttpUtility.ParseQueryString(string.Empty);

            // Request headers
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");

            // Request parameters
            queryString["returnFaceId"] = "true";
            queryString["returnFaceLandmarks"] = "false";
            queryString["returnFaceAttributes"] = "{string}";
            queryString["recognitionModel"] = "recognition_03";
            queryString["returnRecognitionModel"] = "false";
            queryString["detectionModel"] = "detection_03";
            queryString["faceIdTimeToLive"] = "86400";
            var uri = "https://switzerlandwest.api.cognitive.microsoft.com/face/v1.0-preview/detect?" + queryString;

            HttpResponseMessage response;

            // Request body
            byte[] byteData = Encoding.UTF8.GetBytes("{body}");

            using (var content = new ByteArrayContent(byteData))
            {
               content.Headers.ContentType = new MediaTypeHeaderValue("< your content type, i.e. application/json >");
               response = await client.PostAsync(uri, content);
               Console.WriteLine(await response.Content.ReadAsStringAsync());
            }

        }
    }
}
// This sample uses the Apache HTTP client from HTTP Components (http://hc.apache.org/httpcomponents-client-ga/)
import java.net.URI;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.client.utils.URIBuilder;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class JavaSample 
{
    public static void main(String[] args) 
    {
        HttpClient httpclient = HttpClients.createDefault();

        try
        {
            URIBuilder builder = new URIBuilder("https://switzerlandwest.api.cognitive.microsoft.com/face/v1.0-preview/detect");

            builder.setParameter("returnFaceId", "true");
            builder.setParameter("returnFaceLandmarks", "false");
            builder.setParameter("returnFaceAttributes", "{string}");
            builder.setParameter("recognitionModel", "recognition_03");
            builder.setParameter("returnRecognitionModel", "false");
            builder.setParameter("detectionModel", "detection_03");
            builder.setParameter("faceIdTimeToLive", "86400");

            URI uri = builder.build();
            HttpPost request = new HttpPost(uri);
            request.setHeader("Content-Type", "application/json");
            request.setHeader("Ocp-Apim-Subscription-Key", "{subscription key}");


            // Request body
            StringEntity reqEntity = new StringEntity("{body}");
            request.setEntity(reqEntity);

            HttpResponse response = httpclient.execute(request);
            HttpEntity entity = response.getEntity();

            if (entity != null) 
            {
                System.out.println(EntityUtils.toString(entity));
            }
        }
        catch (Exception e)
        {
            System.out.println(e.getMessage());
        }
    }
}

<!DOCTYPE html>
<html>
<head>
    <title>JSSample</title>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js"></script>
</head>
<body>

<script type="text/javascript">
    $(function() {
        var params = {
            // Request parameters
            "returnFaceId": "true",
            "returnFaceLandmarks": "false",
            "returnFaceAttributes": "{string}",
            "recognitionModel": "recognition_03",
            "returnRecognitionModel": "false",
            "detectionModel": "detection_03",
            "faceIdTimeToLive": "86400",
        };
      
        $.ajax({
            url: "https://switzerlandwest.api.cognitive.microsoft.com/face/v1.0-preview/detect?" + $.param(params),
            beforeSend: function(xhrObj){
                // Request headers
                xhrObj.setRequestHeader("Content-Type","application/json");
                xhrObj.setRequestHeader("Ocp-Apim-Subscription-Key","{subscription key}");
            },
            type: "POST",
            // Request body
            data: "{body}",
        })
        .done(function(data) {
            alert("success");
        })
        .fail(function() {
            alert("error");
        });
    });
</script>
</body>
</html>
#import <Foundation/Foundation.h>

int main(int argc, const char * argv[])
{
    NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
    
    NSString* path = @"https://switzerlandwest.api.cognitive.microsoft.com/face/v1.0-preview/detect";
    NSArray* array = @[
                         // Request parameters
                         @"returnFaceId=true",
                         @"returnFaceLandmarks=false",
                         @"returnFaceAttributes={string}",
                         @"recognitionModel=recognition_03",
                         @"returnRecognitionModel=false",
                         @"detectionModel=detection_03",
                         @"faceIdTimeToLive=86400",
                      ];
    
    NSString* string = [array componentsJoinedByString:@"&"];
    path = [path stringByAppendingFormat:@"?%@", string];

    NSLog(@"%@", path);

    NSMutableURLRequest* _request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:path]];
    [_request setHTTPMethod:@"POST"];
    // Request headers
    [_request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
    [_request setValue:@"{subscription key}" forHTTPHeaderField:@"Ocp-Apim-Subscription-Key"];
    // Request body
    [_request setHTTPBody:[@"{body}" dataUsingEncoding:NSUTF8StringEncoding]];
    
    NSURLResponse *response = nil;
    NSError *error = nil;
    NSData* _connectionData = [NSURLConnection sendSynchronousRequest:_request returningResponse:&response error:&error];

    if (nil != error)
    {
        NSLog(@"Error: %@", error);
    }
    else
    {
        NSError* error = nil;
        NSMutableDictionary* json = nil;
        NSString* dataString = [[NSString alloc] initWithData:_connectionData encoding:NSUTF8StringEncoding];
        NSLog(@"%@", dataString);
        
        if (nil != _connectionData)
        {
            json = [NSJSONSerialization JSONObjectWithData:_connectionData options:NSJSONReadingMutableContainers error:&error];
        }
        
        if (error || !json)
        {
            NSLog(@"Could not parse loaded json with error:%@", error);
        }
        
        NSLog(@"%@", json);
        _connectionData = nil;
    }
    
    [pool drain];

    return 0;
}
<?php
// This sample uses the PEAR HTTP_Request2 package (https://pear.php.net/package/HTTP_Request2)
require_once 'HTTP/Request2.php';

$request = new HTTP_Request2('https://switzerlandwest.api.cognitive.microsoft.com/face/v1.0-preview/detect');
$url = $request->getUrl();

$headers = array(
    // Request headers
    'Content-Type' => 'application/json',
    'Ocp-Apim-Subscription-Key' => '{subscription key}',
);

$request->setHeader($headers);

$parameters = array(
    // Request parameters
    'returnFaceId' => 'true',
    'returnFaceLandmarks' => 'false',
    'returnFaceAttributes' => '{string}',
    'recognitionModel' => 'recognition_03',
    'returnRecognitionModel' => 'false',
    'detectionModel' => 'detection_03',
    'faceIdTimeToLive' => '86400',
);

$url->setQueryVariables($parameters);

$request->setMethod(HTTP_Request2::METHOD_POST);

// Request body
$request->setBody("{body}");

try
{
    $response = $request->send();
    echo $response->getBody();
}
catch (HTTP_Request2_Exception $ex)
{
    echo $ex;
}

?>
########### Python 2.7 #############
import httplib, urllib, base64

headers = {
    # Request headers
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': '{subscription key}',
}

params = urllib.urlencode({
    # Request parameters
    'returnFaceId': 'true',
    'returnFaceLandmarks': 'false',
    'returnFaceAttributes': '{string}',
    'recognitionModel': 'recognition_03',
    'returnRecognitionModel': 'false',
    'detectionModel': 'detection_03',
    'faceIdTimeToLive': '86400',
})

try:
    conn = httplib.HTTPSConnection('switzerlandwest.api.cognitive.microsoft.com')
    conn.request("POST", "/face/v1.0-preview/detect?%s" % params, "{body}", headers)
    response = conn.getresponse()
    data = response.read()
    print(data)
    conn.close()
except Exception as e:
    print(e)

####################################

########### Python 3.2 #############
import http.client, urllib.request, urllib.parse, urllib.error, base64

headers = {
    # Request headers
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': '{subscription key}',
}

params = urllib.parse.urlencode({
    # Request parameters
    'returnFaceId': 'true',
    'returnFaceLandmarks': 'false',
    'returnFaceAttributes': '{string}',
    'recognitionModel': 'recognition_03',
    'returnRecognitionModel': 'false',
    'detectionModel': 'detection_03',
    'faceIdTimeToLive': '86400',
})

try:
    conn = http.client.HTTPSConnection('switzerlandwest.api.cognitive.microsoft.com')
    conn.request("POST", "/face/v1.0-preview/detect?%s" % params, "{body}", headers)
    response = conn.getresponse()
    data = response.read()
    print(data)
    conn.close()
except Exception as e:
    print(e)

####################################
require 'net/http'

uri = URI('https://switzerlandwest.api.cognitive.microsoft.com/face/v1.0-preview/detect')

query = URI.encode_www_form({
    # Request parameters
    'returnFaceId' => 'true',
    'returnFaceLandmarks' => 'false',
    'returnFaceAttributes' => '{string}',
    'recognitionModel' => 'recognition_03',
    'returnRecognitionModel' => 'false',
    'detectionModel' => 'detection_03',
    'faceIdTimeToLive' => '86400'
})
if query.length > 0
  if uri.query && uri.query.length > 0
    uri.query += '&' + query
  else
    uri.query = query
  end
end

request = Net::HTTP::Post.new(uri.request_uri)
# Request headers
request['Content-Type'] = 'application/json'
request['Ocp-Apim-Subscription-Key'] = '{subscription key}'
# Request body
request.body = "{body}"

response = Net::HTTP.start(uri.host, uri.port, :use_ssl => uri.scheme == 'https') do |http|
    http.request(request)
end

puts response.body