Thursday, September 29, 2016

WSO2 API Manager 2.0.0 New Throttling logic Execution Order

In this post I would like to discuss how throttling happens within the throttle handler, with the newly added complex throttling for API Manager. This order is very important, as we used it to optimize runtime execution. Here is the order in which the different kinds of policies are executed.

01. Blocking conditions
Blocking conditions are evaluated first, as they are the least expensive check. All blocking conditions are evaluated on a per-node basis. Blocking conditions are simple checks of certain conditions, so we do not need to maintain counters across all gateway nodes.

02. Advanced Throttling
If the request is not blocked, we move on to API-level throttling. Here we throttle at both the API level and the resource level. The API-level throttle key is always the API name, which means we can control the number of requests per API.

03. Subscription Throttling with burst controlling
Next is subscription-level API throttling. When an API is published to the store, subscribers can come and subscribe to it. Whenever a subscription is made, we record that the user subscribed to this API using this application. So whenever an API request reaches the gateway, we take the application id (which uniquely identifies the application) and the API context plus version (which uniquely identify the API) to create the key for subscription-level throttling. That means subscription-level throttling always counts requests for a given API subscribed to by a given application.

04. Application Throttling
Application-level throttling lets users control the total number of requests coming to all APIs subscribed to by a given application. In this case counters are maintained per application and user combination.

05. Custom Throttling Policies
Users are allowed to define dynamic rules for specific use cases. This feature is applied globally across all tenants. System administrators should define these rules, and they will be applied to all users in the system. When you create a custom throttling policy you can define any policy you like, by writing a Siddhi query that addresses the use case. The specific combination of attributes checked by the policy has to be defined as the key (which is called the key template). Usually the key template follows a predefined format and uses a set of predefined parameters.

Please see the diagram below (drawn by Sam Baheerathan) to understand this flow clearly.

Screen Shot 2016-09-28 at 7.41.46 PM.png


How the newly added Traffic Manager fits into a WSO2 API Manager distributed deployment

In this post I would like to show deployment diagrams for an API Manager distributed deployment and how they change after adding the Traffic Manager. If you are interested in complex Traffic Manager deployment patterns, you can go through my previous blog posts. Here I will list only the deployment diagrams.

Please see the distributed API Manager deployment diagram below.

[Diagram: distributed API Manager deployment]


Now here is how it looks after adding traffic manager instances to it.

Untitled drawing(2).jpg


This is how the distributed deployment looks after adding high availability for the Traffic Manager instances.

traffic-manager-deployment-LBPublisher-failoverReciever.png

Monday, September 26, 2016

WSO2 API Manager - How do Custom Throttling Policies work?

Users are allowed to define dynamic rules for specific use cases. This feature is applied globally across all tenants. System administrators should define these rules, and they will be applied to all users in the system. When you create a custom throttling policy you can define any policy you like, by writing a Siddhi query that addresses the use case. The specific combination of attributes checked by the policy has to be defined as the key (which is called the key template). Usually the key template follows a predefined format and uses a set of predefined parameters.
With the new throttling implementation, which uses WSO2 Complex Event Processor as the global throttling engine, users can create their own custom throttling policies by writing custom Siddhi queries. A key template can contain a combination of allowed keys separated by a colon (":"), and each key should start with the "$" prefix, for example $userId:$apiContext:$apiVersion. In WSO2 API Manager 2.0.0, users can use the following keys to create custom throttling policies.
  • apiContext
  • apiVersion
  • resourceKey
  • userId
  • appId
  • apiTenant
  • appTenant

Sample custom policy

FROM RequestStream
SELECT userId, (userId == 'admin@carbon.super' and apiKey == '/pizzashack/1.0.0:1.0.0') AS isEligible,
       str:concat('admin@carbon.super', ':', '/pizzashack/1.0.0:1.0.0') AS throttleKey
INSERT INTO EligibilityStream;

FROM EligibilityStream[isEligible == true]#window.time(1 min)
SELECT throttleKey, (count(throttleKey) >= 5) AS isThrottled
GROUP BY throttleKey
INSERT ALL EVENTS INTO ResultStream;
As shown in the above Siddhi query, the throttle key should match the key template format. If there is a mismatch between the key template format and the throttle key, requests will not be throttled.
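To make the relationship between the key template and the throttle key concrete, here is a small illustrative sketch (the class and helper names are hypothetical, not part of the product): it substitutes runtime request attributes into a key template such as $userId:$apiContext:$apiVersion, producing the same key that the sample query builds with str:concat.

```java
import java.util.Map;

public class KeyTemplateDemo {
    // Replace each $-prefixed placeholder in the template with its runtime value.
    static String buildThrottleKey(String template, Map<String, String> values) {
        String key = template;
        for (Map.Entry<String, String> e : values.entrySet()) {
            key = key.replace("$" + e.getKey(), e.getValue());
        }
        return key;
    }

    public static void main(String[] args) {
        // Key template configured with the custom policy.
        String template = "$userId:$apiContext:$apiVersion";
        // Values extracted from the incoming request.
        Map<String, String> values = Map.of(
                "userId", "admin@carbon.super",
                "apiContext", "/pizzashack/1.0.0",
                "apiVersion", "1.0.0");
        // Prints admin@carbon.super:/pizzashack/1.0.0:1.0.0,
        // matching the str:concat(...) key in the sample Siddhi query.
        System.out.println(buildThrottleKey(template, values));
    }
}
```

Only when the runtime key produced this way matches a key the Siddhi query emits will the request be throttled.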

WSO2 API Manager - How does Subscription Throttling with burst controlling work?


Next is subscription-level API throttling. When an API is published to the store, subscribers can come and subscribe to it. Whenever a subscription is made, we record that the user subscribed to this API using this application. So whenever an API request reaches the gateway, we take the application id (which uniquely identifies the application) and the API context plus version (which uniquely identify the API) to create the key for subscription-level throttling. That means subscription-level throttling always counts requests for a given API subscribed to by a given application.

Up to API Manager 1.10, subscription-level throttling was applied on a per-user basis. That means if multiple users used the same subscription, each of them got their own copy of the allowed quota, which becomes unmanageable as the user base grows.
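The key construction described above can be sketched roughly as follows (the exact internal key format is an implementation detail of the gateway; the names and separators here are illustrative):

```java
public class SubscriptionKeyDemo {
    // Subscription-level key: the application id plus the API context and version,
    // so request counts are kept per (application, API) subscription rather than per user.
    static String subscriptionKey(String appId, String apiContext, String apiVersion) {
        return appId + ":" + apiContext + ":" + apiVersion;
    }

    public static void main(String[] args) {
        // All users sharing application 5's subscription to this API share one counter.
        System.out.println(subscriptionKey("5", "/pizzashack/1.0.0", "1.0.0"));
    }
}
```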

When you define advanced throttling policies you can also define a burst control policy. This is very important, because otherwise one user could consume the entire allocated quota within a short period of time and the rest of the users could not use the API in a fair way.

Screenshot from 2016-09-26 17-51-38.png


WSO2 API Manager - How Application Level Throttling Works?


Application-level throttling lets users control the total number of requests coming to all APIs subscribed to by a given application. In this case counters are maintained per application and user combination.

Screenshot from 2016-09-26 17-52-46.png
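The application-level counter described above can be sketched as a simple key combining the application and the user (an illustrative format, not the exact internal representation):

```java
public class ApplicationKeyDemo {
    // Application-level key: application id plus the end user, so the quota
    // is enforced per application/user combination across all subscribed APIs.
    static String applicationKey(String appId, String userId) {
        return appId + ":" + userId;
    }

    public static void main(String[] args) {
        // One counter for all APIs that application 5 has subscribed to, for this user.
        System.out.println(applicationKey("5", "admin@carbon.super"));
    }
}
```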

WSO2 API Manager - How does advanced API and Resource level throttling work?

If the request is not blocked, we move on to API-level throttling. Here we throttle at both the API level and the resource level. The API-level throttle key is always the API name, which means we can control the number of requests per API.

An advanced API-level policy is applicable at two levels (the UI does not support the per-user level at the moment, but the runtime does):
  1. Per user level - all API request counts are maintained per user + API combination.
  2. Per API/Resource level - counts are maintained per API, without considering the user.

For the moment let's only consider the per-API count, as it's supported out of the box. First, API-level throttling happens. That means if you attach a policy when you define the API, it applies at the API level.

You can also add throttling tiers at the resource level when you create an API. That means a given resource is allowed a certain quota. So even if the same resource is accessed by different applications, it still allows the same total number of requests.

Screenshot from 2016-09-26 17-53-50.png
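To illustrate the difference between the API-level and resource-level counters described above, here is a rough sketch (the key formats are illustrative, not the exact internal representation):

```java
public class ApiResourceKeyDemo {
    // API-level key: one counter per API (context plus version), shared by all
    // users and applications in the default per-API mode.
    static String apiLevelKey(String apiContext, String apiVersion) {
        return apiContext + ":" + apiVersion;
    }

    // Resource-level key: narrows the counter to a single resource and HTTP verb,
    // so the quota is shared no matter which application calls the resource.
    static String resourceLevelKey(String apiContext, String apiVersion,
                                   String resourcePath, String httpVerb) {
        return apiContext + "/" + apiVersion + resourcePath + ":" + httpVerb;
    }

    public static void main(String[] args) {
        System.out.println(apiLevelKey("/pizzashack/1.0.0", "1.0.0"));
        System.out.println(resourceLevelKey("/pizzashack/1.0.0", "1.0.0", "/menu", "GET"));
    }
}
```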

When you design a complex policy you can define it based on multiple parameters such as transport headers, IP addresses, user agent, or any other header-based attribute. When we evaluate this kind of complex policy, the API or resource ID is always picked as the base key. Then multiple keys are created based on the number of conditional groups in your policy.

Screenshot from 2016-09-26 17-54-01.png

WSO2 API Manager new throttling - How do Blocking conditions work?

Blocking conditions are evaluated first, as they are the least expensive check. All blocking conditions are evaluated on a per-node basis. Blocking conditions are simple checks of certain conditions, so we do not need to maintain counters across all gateway nodes. Requests are evaluated against the following attributes. All blocking conditions are added and evaluated at tenant level, which means one tenant cannot block another tenant's requests.
  • apiContext - Use this when you need to block all requests coming to a given API. Here the blocking key will be the complete context of the API URL.
  • appLevelBlockingKey - Use this when you need to block all requests coming from a given application. Here the blocking key is constructed by combining the subscriber name and the application name.
  • authorizedUser - Use this when you need to block requests coming from a specific user. The blocking key will be the authorized user present in the message context.
  • ipLevelBlockingKey - Use IP-level blocking when you need to block a specific IP address from accessing the system. This also applies at tenant level, and the blocking key is constructed from the IP address of the incoming message and the tenant id.

Screenshot from 2016-09-26 17-56-30.png
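The blocking keys described above can be sketched as simple string combinations (illustrative only; the exact internal format may differ):

```java
public class BlockingKeyDemo {
    // appLevelBlockingKey: subscriber name combined with the application name.
    static String appLevelBlockingKey(String subscriber, String appName) {
        return subscriber + ":" + appName;
    }

    // ipLevelBlockingKey: tenant id combined with the caller's IP address,
    // so one tenant's IP block cannot affect another tenant.
    static String ipLevelBlockingKey(int tenantId, String ip) {
        return tenantId + ":" + ip;
    }

    public static void main(String[] args) {
        System.out.println(appLevelBlockingKey("admin", "DefaultApplication"));
        System.out.println(ipLevelBlockingKey(-1234, "10.100.1.22"));
    }
}
```

Because these are plain condition checks against the incoming message, each gateway node can evaluate them locally without any shared counter state.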

Tuesday, September 6, 2016

Load balance data publishing to multiple receiver groups - WSO2 API Manager /Traffic Manager

In previous articles we discussed the Traffic Manager and different deployment patterns. In this article we will further discuss the different Traffic Manager deployments we can use across data centers. Cross-data-center deployments must use the publisher group concept, as each event needs to be sent to every data center if we need global counts across DCs.

In this scenario there are two groups of servers, referred to as Group A and Group B. Events are sent to both groups. You can also carry out load balancing within each group, as described in load balancing between a set of servers. This scenario is a combination of load balancing within a set of servers and sending an event to several receivers.
An event is sent to both Group A and Group B. Within Group A it is sent to either Traffic Manager-01 or Traffic Manager-02; similarly, within Group B it is sent to either Traffic Manager-03 or Traffic Manager-04. In this setup you can have any number of groups and any number of Traffic Managers within a group, as required, by listing them accurately in the server URL. In this scenario it is mandatory to publish events to each group, but within a group we can do it in two different ways.

  1. Publishing to multiple receiver groups with load balancing within group
  2. Publishing to multiple receiver groups with failover within group

Now let's discuss both of these options in detail. This pattern is the recommended approach for multi-data-center deployments when we need unique counters across data centers. Each group resides within a data center, and within each data center there are two Traffic Manager nodes to handle high availability.

Publishing to multiple receiver groups with load balancing within group

As you can see in the diagram below, the data publisher pushes events to both groups. But since there are multiple nodes within each group, it sends each event to only one node at a time, in round-robin fashion. That means within Group A the first event goes to Traffic Manager 01, the next goes to Traffic Manager 02, and so on. If Traffic Manager node 01 is unavailable, all traffic goes to Traffic Manager node 02, which addresses failover scenarios.
Copy of traffic-manager-deployment-LBPublisher-failoverReciever(5).png
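The round-robin-with-failover behavior described above can be sketched as follows (a simplified illustration of the publisher's behavior, not the actual data agent implementation):

```java
import java.util.List;
import java.util.Set;

public class LoadBalancedGroupDemo {
    private final List<String> receivers;
    private int next = 0;

    LoadBalancedGroupDemo(List<String> receivers) {
        this.receivers = receivers;
    }

    // Pick the next receiver in round-robin order, skipping nodes marked down.
    String pick(Set<String> down) {
        for (int i = 0; i < receivers.size(); i++) {
            String candidate = receivers.get(next % receivers.size());
            next++;
            if (!down.contains(candidate)) {
                return candidate;
            }
        }
        throw new IllegalStateException("no receiver available in group");
    }

    public static void main(String[] args) {
        LoadBalancedGroupDemo groupA = new LoadBalancedGroupDemo(
                List.of("tcp://127.0.0.1:9612", "tcp://127.0.0.1:9613"));
        System.out.println(groupA.pick(Set.of())); // tcp://127.0.0.1:9612
        System.out.println(groupA.pick(Set.of())); // tcp://127.0.0.1:9613
        // With node 01 down, all events within the group go to node 02.
        System.out.println(groupA.pick(Set.of("tcp://127.0.0.1:9612")));
    }
}
```

In the real deployment the same selection runs independently for each group, so every event still reaches one node in every data center.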

Similar to the other scenarios, you can describe this as a receiver URL. Each group should be mentioned within curly braces, separated by commas. Furthermore, each receiver belonging to a group should be within the curly braces, with the receiver URLs in a comma-separated format. The receiver URL format is given below.

<DataPublisher>
    <Enabled>true</Enabled>
    <Type>Binary</Type>
    <ReceiverUrlGroup>{tcp://127.0.0.1:9612,tcp://127.0.0.1:9613},{tcp://127.0.0.2:9612,tcp://127.0.0.2:9613}</ReceiverUrlGroup>
    <AuthUrlGroup>{ssl://127.0.0.1:9712,ssl://127.0.0.1:9713},{ssl://127.0.0.2:9712,ssl://127.0.0.2:9713}</AuthUrlGroup>
    ...
</DataPublisher>

Publishing to multiple receiver groups with failover within group


As you can see in the diagram below, the data publisher pushes events to both groups. But since there are multiple nodes within each group, it sends each event to only one node at a time. If that node goes down, the event publisher sends events to the other node within the same group. This model guarantees message publishing to each server group.



Copy of traffic-manager-deployment-LBPublisher-failoverReciever(3).png
According to the following configuration, the data publisher sends events to both Group A and Group B. Within Group A events go to either Traffic Manager 01 or Traffic Manager 02. If events go to Traffic Manager 01, they keep going to that node until it becomes unavailable; once it is unavailable, events go to Traffic Manager 02.


<DataPublisher>
    <Enabled>true</Enabled>
    <Type>Binary</Type>
    <ReceiverUrlGroup>{tcp://127.0.0.1:9612|tcp://127.0.0.1:9613},{tcp://127.0.0.2:9612|tcp://127.0.0.2:9613}</ReceiverUrlGroup>
    <AuthUrlGroup>{ssl://127.0.0.1:9712,ssl://127.0.0.1:9713},{ssl://127.0.0.2:9712,ssl://127.0.0.2:9713}</AuthUrlGroup>
    ...
</DataPublisher>
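The failover selection within a group can be sketched like this (a simplified illustration of the behavior, not the actual data agent code):

```java
import java.util.List;
import java.util.Set;

public class FailoverGroupDemo {
    // Always publish to the first reachable receiver in the configured order:
    // node 01 gets all events until it becomes unavailable, then node 02 takes over.
    static String pick(List<String> receivers, Set<String> down) {
        for (String r : receivers) {
            if (!down.contains(r)) {
                return r;
            }
        }
        throw new IllegalStateException("no receiver available in group");
    }

    public static void main(String[] args) {
        List<String> groupA = List.of("tcp://127.0.0.1:9612", "tcp://127.0.0.1:9613");
        System.out.println(pick(groupA, Set.of()));                       // node 01
        System.out.println(pick(groupA, Set.of("tcp://127.0.0.1:9612"))); // node 02
    }
}
```

Contrast this with the load-balancing option, where events rotate across the nodes of a group even while all of them are healthy.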

Thursday, September 1, 2016

Failover throttle data receiver pattern for API Gateway (WSO2 API Manager Traffic Manager)

In this pattern we connect gateway workers to two Traffic Managers. If one goes down, the other can act as the Traffic Manager for the gateway. So we need to configure the gateway to push throttle events to both Traffic Managers. Please see the diagram below to understand how this deployment works. As you can see, the gateway node pushes events to both Traffic Manager node 01 and node 02, and the gateway also receives throttle decision updates from both Traffic Managers using the failover data receiver pattern.

The Traffic Managers are fronted with a load balancer as shown in the diagram.

The admin dashboard/publisher server communicates with the Traffic Managers through the load balancer. When a user creates a new policy from the admin dashboard, that policy is stored in the database and published to one Traffic Manager node through the load balancer. Since we have a deployment synchronization mechanism for Traffic Managers, one Traffic Manager can update the other with the latest changes. So it is sufficient to publish a throttle policy to one node in an active/passive pattern (if one node is active, keep sending requests to it; if it's unavailable, send to the other node). If we are planning to use an SVN-based deployment synchronizer, then created throttle policies should always be published to the manager node (the Traffic Manager instance) and the workers need to synchronize with it.
traffic-manager-deployment-LBPublisher-failoverReciever.png



The idea behind a failover data receiver endpoint is to avoid a single point of failure in the system. A broker is deployed in each Traffic Manager to store and forward messages; if that server goes down, the entire message flow of the system goes down, no matter what other servers and functions are involved. So, in order to build a robust messaging system, it is mandatory to have a failover mechanism.

When we have a few Traffic Manager instances up and running in the system, each of these servers has a broker. If one broker goes down, the gateway automatically switches to the other broker and continues receiving throttle messages. If that one also fails, it tries the next, and so on. Thus the system as a whole has no downtime.

So, in order to achieve high availability on the data receiving side, we need to configure JMSConnectionParameters to connect to the multiple brokers running within the Traffic Managers. For that we need to add the following configuration to each gateway. If a single gateway communicates with multiple Traffic Managers, this is the easiest way to configure it.


<JMSConnectionParameters>
    <transport.jms.ConnectionFactoryJNDIName>TopicConnectionFactory</transport.jms.ConnectionFactoryJNDIName>
    <transport.jms.DestinationType>topic</transport.jms.DestinationType>
    <java.naming.factory.initial>org.wso2.andes.jndi.PropertiesFileInitialContextFactory</java.naming.factory.initial>
    <connectionfactory.TopicConnectionFactory>amqp://admin:admin@clientID/carbon?failover='roundrobin'%26cyclecount='2'%26brokerlist='tcp://127.0.0.1:5673?retries='5'%26connectdelay='50';tcp://127.0.0.1:5674?retries='5'%26connectdelay='50''</connectionfactory.TopicConnectionFactory>
</JMSConnectionParameters>