How to use the API Manager application workflow to automate the token generation process

Workflow extensions allow you to attach a custom workflow to various operations in the API Manager, such as user sign-up, application creation, registration, and subscription. By default, the API Manager workflows use the Simple Workflow Executor.

The Simple Workflow Executor carries out an operation without any intervention by a workflow admin. For example, when a user creates an application, the Simple Workflow Executor allows the application to be created without an admin having to approve the creation.

Sometimes we may need to perform additional operations as part of a workflow.
In this example we discuss how to generate access tokens automatically once application creation has finished. By default, you need to generate keys manually after creating an application in the API Store. With this sample, that process is automated and access tokens are generated for your application automatically.

You can find more information about workflows in this document:
https://docs.wso2.com/display/AM170/Adding+Workflow+Extensions


Let's first see how we can intercept the workflow completion process and add custom behavior.

The ApplicationCreationSimpleWorkflowExecutor.complete() method executes after the workflow is resumed from BPS.
So we can write our own workflow executor implementation and do whatever we need there.
We will have the user name, application ID, tenant domain, and the other parameters required to trigger subscription/key generation.
If needed, we can directly call the DAO or APIConsumerImpl to generate a token (by calling getApplicationAccessKey).
In this case we generate the tokens from the workflow executor.

Below you will see the code for application creation. This class is the same as ApplicationCreationSimpleWorkflowExecutor, but it additionally generates the keys in ApplicationCreationExecutor.complete(). This way, the token is generated as soon as the application is created.

If needed, you can call OAuthAdminService.getOAuthApplicationDataByAppName() from the BPS side over a SOAP call to get these details. If you want to send an email with the generated tokens, you can do that as well.





package test.com.apim.workflow;

import java.util.List;
import java.util.Map;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.wso2.carbon.apimgt.api.APIManagementException;
import org.wso2.carbon.apimgt.impl.APIConstants;
import org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO;
import org.wso2.carbon.apimgt.impl.dto.ApplicationWorkflowDTO;
import org.wso2.carbon.apimgt.impl.dto.WorkflowDTO;
import org.wso2.carbon.apimgt.impl.workflow.WorkflowException;
import org.wso2.carbon.apimgt.impl.workflow.WorkflowExecutor;
import org.wso2.carbon.apimgt.impl.workflow.WorkflowConstants;
import org.wso2.carbon.apimgt.impl.workflow.WorkflowStatus;

import org.wso2.carbon.apimgt.impl.dto.ApplicationRegistrationWorkflowDTO;
import org.wso2.carbon.apimgt.impl.APIManagerFactory;
import org.wso2.carbon.apimgt.api.APIConsumer;

public class ApplicationCreationExecutor extends WorkflowExecutor {

    private static final Log log =
            LogFactory.getLog(ApplicationCreationExecutor.class);

    private String userName;
    private String appName;

    @Override
    public String getWorkflowType() {
        return WorkflowConstants.WF_TYPE_AM_APPLICATION_CREATION;
    }

    /**
     * Execute the workflow executor
     *
     * @param workFlowDTO
     *            - {@link ApplicationWorkflowDTO}
     * @throws WorkflowException
     */

    public void execute(WorkflowDTO workFlowDTO) throws WorkflowException {
        if (log.isDebugEnabled()) {
            log.debug("Executing Application creation Workflow..");
        }
        workFlowDTO.setStatus(WorkflowStatus.APPROVED);
        complete(workFlowDTO);

    }

    /**
     * Complete the external process status
     * Based on the workflow status we will update the status column of the
     * Application table
     *
     * @param workFlowDTO - WorkflowDTO
     */
    public void complete(WorkflowDTO workFlowDTO) throws WorkflowException {
        if (log.isDebugEnabled()) {
            log.debug("Completing Application creation Workflow..");
        }

        String status = null;
        if ("CREATED".equals(workFlowDTO.getStatus().toString())) {
            status = APIConstants.ApplicationStatus.APPLICATION_CREATED;
        } else if ("REJECTED".equals(workFlowDTO.getStatus().toString())) {
            status = APIConstants.ApplicationStatus.APPLICATION_REJECTED;
        } else if ("APPROVED".equals(workFlowDTO.getStatus().toString())) {
            status = APIConstants.ApplicationStatus.APPLICATION_APPROVED;
        }

        ApiMgtDAO dao = new ApiMgtDAO();

        try {
            dao.updateApplicationStatus(Integer.parseInt(workFlowDTO.getWorkflowReference()),status);
        } catch (APIManagementException e) {
            String msg = "Error occurred when updating the status of the Application creation process";
            log.error(msg, e);
            throw new WorkflowException(msg, e);
        }

        ApplicationWorkflowDTO appDTO = (ApplicationWorkflowDTO) workFlowDTO;
        userName = appDTO.getUserName();
        appName = appDTO.getApplication().getName();

        log.info("UserName: " + userName + ", appName: " + appName);

        Map mapConsumerKeySecret = null;

        try {
            APIConsumer apiConsumer = APIManagerFactory.getInstance().getAPIConsumer(userName);
            String[] appliedDomains = {""};
            // Key generation
            mapConsumerKeySecret = apiConsumer.requestApprovalForApplicationRegistration(
                    userName, appName, "PRODUCTION", "", appliedDomains, "3600");
        } catch (APIManagementException e) {
            throw new WorkflowException("An error occurred while generating the token.", e);
        }

        for (Object o : mapConsumerKeySecret.entrySet()) {
            Map.Entry entry = (Map.Entry) o;
            log.info("Key: " + entry.getKey() + ", value: " + entry.getValue());
        }
    }

    @Override
    public List getWorkflowDetails(String workflowStatus) throws WorkflowException {
        return null;
    }

}


Then add this executor as the application creation workflow.
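In API Manager 1.x the executor is registered by editing the workflow-extensions.xml file in the registry (browse to /_system/governance/apimgt/applicationdata/workflow-extensions.xml from the management console), after copying the executor jar into repository/components/lib. A minimal sketch, assuming the class above:

```xml
<WorkFlowExtensions>
    <!-- Replace the default ApplicationCreation executor with the custom one -->
    <ApplicationCreation executor="test.com.apim.workflow.ApplicationCreationExecutor"/>
    <!-- leave the remaining default workflow executors unchanged -->
</WorkFlowExtensions>
```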

How to write custom handler for API Manager

In this post (https://docs.wso2.com/display/AM180/Writing+Custom+Handlers) we explained how to add a handler to the API Manager.

Here I will add the sample code required for a handler. You can import it into your favorite IDE and start implementing your logic.

Please find the sample code here: https://drive.google.com/file/d/0B3OmQJfm2Ft8YlRjYV96VVcxaVk/view?usp=sharing

A dummy class would look like this; you can implement your logic in it.


package org.wso2.carbon.test.gateway;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.synapse.MessageContext;
import org.apache.synapse.rest.AbstractHandler;

public class TestHandler extends AbstractHandler {

    private static final String DIRECTION_OUT = "Out";
    private static final Log log = LogFactory.getLog(TestHandler.class);

    public boolean mediate(MessageContext messageContext, String direction) {
        log.info("===============================================================================");
        return true;
    }


    public boolean handleRequest(MessageContext messageContext) {
        log.info("===============================================================================");
        return true;
    }

    public boolean handleResponse(MessageContext messageContext) {
        return mediate(messageContext, DIRECTION_OUT);
    }
}
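To try the handler, copy its jar into repository/components/lib and engage it in the API's synapse configuration (under repository/deployment/server/synapse-configs/default/api/). A sketch of the handlers section, with the existing API Manager handlers kept in place:

```xml
<handlers>
    <!-- the default API Manager handlers appear here; keep them as they are -->
    <handler class="org.wso2.carbon.test.gateway.TestHandler"/>
</handlers>
```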

How to fine-tune API Manager 1.8.0 for maximum TPS and minimum response time


Here I will discuss API Manager 1.8.0 performance tuning. I tested this on the deployment described below. Please note that these results can vary depending on your hardware, server load, and network. This is not even a fully optimized environment, and you may well go beyond these numbers with a better combination of hardware, network, and configuration for your use case.

Server specifications


System Information
Manufacturer: Fedora Project
Product Name: OpenStack Nova
Version: 2014.2.1-1.el7.centos

4 X CPU cores
Processor Information
Socket Designation: CPU 1
Type: Central Processor
Family: Other
Manufacturer: Bochs
Max Speed: 2000 MHz
Current Speed: 2000 MHz
Status: Populated, Enabled

Memory Device
Total Width: 64 bits
Data Width: 64 bits
Size: 8192 MB
Form Factor: DIMM
Type: RAM



Deployment Details


Deployment 01.
2 gateways (each running on a dedicated machine)
2 key managers (each running on a dedicated machine)
MySQL database server
1 dedicated machine to run JMeter

Deployment 02.
1 gateway
1 key manager
MySQL database server
1 dedicated machine to run JMeter

 

Configuration changes.

Gateway changes.
Enable WS key validation for key management.
Edit /home/sanjeewa/work/wso2am-1.8.0/repository/conf/api-manager.xml with the following configurations.

[Default value is ThriftClient]
<KeyValidatorClientType>WSClient</KeyValidatorClientType>

[Default value is true]
<EnableThriftServer>false</EnableThriftServer>

Other than this, all configurations keep their default values. However, note that each gateway should be configured to communicate with the key manager.

Key Manager changes.
Edit /home/sanjeewa/work/wso2am-1.8.0/repository/conf/api-manager.xml with the following configuration.

[Default value is true]
<EnableThriftServer>false</EnableThriftServer>

There is no need to run the Thrift server here, as we use the WS client for key validation calls.
Both gateway and key manager nodes are configured against MySQL. For this I configured the user manager, API manager, and registry databases on the MySQL servers.

Tuning parameters applied.

Gateway nodes.

01. Change synapse configurations
Add the following entries to the /home/sanjeewa/work/wso2am-1.8.0/repository/conf/synapse.properties file.
synapse.threads.core=100
synapse.threads.max=250
synapse.threads.keepalive=5
synapse.threads.qlen=1000



02. Disable HTTP access logs
Since we are testing gateway functionality here, we need not worry much about HTTP access logs (though you may need to enable them to track access). For this deployment we assume the key managers run in a DMZ, so there is no need to track HTTP access. For gateways this is usually not required either, since we do not expose servlet ports to the outside (normally only 8243 and 8280 are open).
Add the following entry to the /home/sanjeewa/work/wso2am-1.8.0/repository/conf/log4j.properties file.

log4j.logger.org.apache.synapse.transport.http.access=OFF

Remove the following entry from /wso2am-1.8.0/repository/conf/tomcat/catalina-server.xml to disable HTTP access logs.


          <Valve className="org.apache.catalina.valves.AccessLogValve" directory="${carbon.home}/repository/logs"
               prefix="http_access_" suffix=".log"
               pattern="combined" />





03. Tune parameters in the axis2_client.xml file. We use the Axis2 client to communicate from the gateway to the key manager for key validation. Edit wso2am-1.8.0/repository/conf/axis2/axis2_client.xml and update the following entries.

    <parameter name="defaultMaxConnPerHost">1000</parameter>
    <parameter name="maxTotalConnections">30000</parameter>



Key manager nodes.

01. Disable HTTP access logs
As on the gateway nodes, we disable HTTP access logs, since for this deployment we assume the key managers run in a DMZ and there is no need to track HTTP access.

Add the following entry to the /home/sanjeewa/work/wso2am-1.8.0/repository/conf/log4j.properties file.

log4j.logger.org.apache.synapse.transport.http.access=OFF


02. Change DBCP connection parameters / datasource configurations.
These parameters can be debated, especially disabling the validation query. But with high concurrency and well-performing database servers we can disable it, because the created connections are heavily used anyway. On the other hand, a connection may pass validation yet still fail when it is actually used. So, as I understand it, there is no issue with disabling it in a high-concurrency scenario.

I also added the following parameters to optimize the database connection pool.
<maxWait>60000</maxWait>
<initialSize>20</initialSize>
<maxActive>150</maxActive>
<maxIdle>60</maxIdle>
<minIdle>40</minIdle>
<testOnBorrow>false</testOnBorrow>
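These properties go inside the <configuration> element of the relevant datasource in repository/conf/datasources/master-datasources.xml. A minimal sketch, assuming a MySQL-backed WSO2AM_DB datasource (the URL and credentials below are placeholders):

```xml
<datasource>
    <name>WSO2AM_DB</name>
    <jndiConfig><name>jdbc/WSO2AM_DB</name></jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://db-host:3306/WSO2AM_DB</url>
            <username>apimuser</username>
            <password>password</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <!-- connection pool tuning as above -->
            <maxWait>60000</maxWait>
            <initialSize>20</initialSize>
            <maxActive>150</maxActive>
            <maxIdle>60</maxIdle>
            <minIdle>40</minIdle>
            <testOnBorrow>false</testOnBorrow>
        </configuration>
    </definition>
</datasource>
```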

If you don't want to disable the validation query, you may use the following configuration (here I increased the validation interval to avoid running the validation query too frequently).

<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1</validationQuery>
<validationInterval>120000</validationInterval>



03. Tune Tomcat parameters on the key manager node.
This is important because the gateway calls the key validation web service on the key manager.
Change the following properties in the /home/sanjeewa/work/wso2am-1.8.0/repository/conf/tomcat/catalina-server.xml file.


Here is a brief description of the changed parameters. For reference, the description of each field is copied from the Tomcat documentation (http://tomcat.apache.org/tomcat-7.0-doc/config/http.html).

I updated acceptorThreadCount to 4 (the default was 2) because my machine has 4 cores.
After this change I noticed a considerable reduction in the CPU usage of each core.

Increased maxThreads to 750(default value was 250)
The maximum number of request processing threads to be created by this Connector, which therefore determines the maximum number of simultaneous requests that can be handled. If not specified, this attribute is set to 200. If an executor is associated with this connector, this attribute is ignored as the connector will execute tasks using the executor rather than an internal thread pool.

Increased minSpareThreads to 250 (default value was 50)
The minimum number of threads always kept running. If not specified, the default of 10 is used.

Increased maxKeepAliveRequests to 400 (default value was 200)
The maximum number of HTTP requests which can be pipelined until the connection is closed by the server. Setting this attribute to 1 will disable HTTP/1.0 keep-alive, as well as HTTP/1.1 keep-alive and pipelining. Setting this to -1 will allow an unlimited amount of pipelined or keep-alive HTTP requests. If not specified, this attribute is set to 100.

Increased acceptCount to 400 (default value was 200)
The maximum queue length for incoming connection requests when all possible request processing threads are in use. Any requests received when the queue is full will be refused. The default value is 100.

compression="off"
Disabled compression. However, this might not have much effect here, since we do not use compressed payloads.



<Connector  protocol="org.apache.coyote.http11.Http11NioProtocol"
               port="9443"
               bindOnInit="false"
               sslProtocol="TLS"
               maxHttpHeaderSize="8192"
               acceptorThreadCount="4"
               maxThreads="750"
               minSpareThreads="250"
               disableUploadTimeout="false"
               enableLookups="false"
               connectionUploadTimeout="120000"
               maxKeepAliveRequests="400"
               acceptCount="400"
               server="WSO2 Carbon Server"
               clientAuth="false"
               compression="off"
               scheme="https"
               secure="true"
               SSLEnabled="true"
               compressionMinSize="2048"
               noCompressionUserAgents="gozilla, traviata"
               compressableMimeType="text/html,text/javascript,application/x-javascript,application/javascript,application/xml,text/css,application/xslt+xml,text/xsl,image/gif,image/jpg,image/jpeg"
               URIEncoding="UTF-8"/>





Testing

Test 01 - Clustered gateway/key manager test(2 nodes)

For this test we used 10000 tokens and 150 concurrency per gateway server. The test was carried out for 20 minutes to avoid caching effects on the performance figures.


Round 01 with default configuration parameters
sampler_label | req_count | average | median | 90%_line | min | max | error% | rate | bandwidth
GW1310498650235014967602587.4236482276.629596
GW2315110949205302301302625.8847782310.470884
TOTAL625609550215104967605213.212664587.016218

Round 02 with HTTP access logs disabled on key manager / DB tuned / testOnBorrow enabled / synapse tuned
sampler_label | req_count | average | median | 90%_line | min | max | error% | rate | bandwidth
GW151257042220351994104271.9077093758.77817
GW253617252118330169104468.0334233931.345814
TOTAL104874292219340994108739.0580847689.347005

Round 03 with HTTP access logs disabled / DB tuned / testOnBorrow enabled / synapse tuned
sampler_label | req_count | average | median | 90%_line | min | max | error% | rate | bandwidth
GW1551192222193511425704592.3154284040.699415
GW2580467420173401190904836.3819664255.449367
TOTAL1131659621183401425709428.4774018295.955213

Round 04 with HTTP access logs disabled / DB tuned, testOnBorrow disabled / synapse tuned
sampler_label | req_count | average | median | 90%_line | min | max | error% | rate | bandwidth
GW1563604923203511496104697.2790024133.05506
GW2585450522183401033204879.4698224293.361631
TOTAL1149055422193501496109576.6028798426.288275

[Figure: Results table (tps.png)]

[Figure: Response time graph (dsdsadsad.png)]

[Figure: Transactions per second graph (Screenshot from 2015-03-18 13:07:09.png)]


For 150 concurrency, clustered (2 nodes):
Average TPS - 4687.85
Average response time - 21 ms
Average delay added by gateway - 16 ms

Test 02 - Single gateway/key manager test


For this test we used 10000 tokens and 300 concurrency on a single gateway server. The test was carried out for 20 minutes to avoid caching effects on the performance figures.







[Figure: Results table]

[Figure: Response time graph]


For 300 concurrency, 1 node:
Average TPS - 4956
Average response time - 55 ms
Average delay added by gateway - 49 ms

How to clean up old and unused tokens in WSO2 API Manager

When we use WSO2 API Manager for a few months, we may accumulate a lot of expired, revoked, and inactive tokens in the IDN_OAUTH2_ACCESS_TOKEN table.
As of now, we do not clear these entries, for logging and audit purposes.
But as the table grows over time, we may need to clear it.
Having a large number of entries will slow down the token generation and validation process.
So in this post we will discuss clearing unused tokens in the API Manager.

Most importantly, we should not try this on an actual deployment straight away, to prevent data loss.
First take a dump of the running server's database.
Then perform these instructions on the dump.
Then start the server pointing to the updated database and test thoroughly to verify that there are no issues.
Once you are confident with the process, you may schedule it for a server maintenance window.
Since deleting table entries may take a considerable amount of time, it is advisable to test on the dumped data before the actual cleanup task.
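Before running the cleanup it is worth checking how many tokens exist in each state, so you can verify the effect afterwards; for example (run against the dumped database):

```sql
-- Count tokens per state (ACTIVE, INACTIVE, REVOKED, EXPIRED)
SELECT TOKEN_STATE, COUNT(*) AS TOKEN_COUNT
FROM IDN_OAUTH2_ACCESS_TOKEN
GROUP BY TOKEN_STATE;
```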



Stored procedure to cleanup tokens

  • Back up the existing IDN_OAUTH2_ACCESS_TOKEN table.
  • Turn off SQL_SAFE_UPDATES.
  • Delete the non-active tokens other than a single record for each state for each combination of CONSUMER_KEY, AUTHZ_USER and TOKEN_SCOPE.
  • Restore the original SQL_SAFE_UPDATES value.

USE `WSO2AM_DB`;
DROP PROCEDURE IF EXISTS `cleanup_tokens`;

DELIMITER $$
CREATE PROCEDURE `cleanup_tokens` ()
BEGIN

-- Backup IDN_OAUTH2_ACCESS_TOKEN table
DROP TABLE IF EXISTS `IDN_OAUTH2_ACCESS_TOKEN_BAK`;
CREATE TABLE `IDN_OAUTH2_ACCESS_TOKEN_BAK` AS SELECT * FROM `IDN_OAUTH2_ACCESS_TOKEN`;

-- 'Turn off SQL_SAFE_UPDATES'
SET @OLD_SQL_SAFE_UPDATES = @@SQL_SAFE_UPDATES;
SET SQL_SAFE_UPDATES = 0;

-- 'Keep the most recent INACTIVE key for each CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE combination'
SELECT 'BEFORE:TOTAL_INACTIVE_TOKENS', COUNT(*) FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'INACTIVE';

SELECT 'TO BE RETAINED', COUNT(*) FROM(SELECT ACCESS_TOKEN FROM (SELECT ACCESS_TOKEN, CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'INACTIVE') x GROUP BY CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE)y;

DELETE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'INACTIVE' AND ACCESS_TOKEN NOT IN (SELECT ACCESS_TOKEN FROM(SELECT ACCESS_TOKEN FROM (SELECT ACCESS_TOKEN, CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'INACTIVE') x GROUP BY CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE)y);

SELECT 'AFTER:TOTAL_INACTIVE_TOKENS', COUNT(*) FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'INACTIVE';

-- 'Keep the most recent REVOKED key for each CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE combination'
SELECT 'BEFORE:TOTAL_REVOKED_TOKENS', COUNT(*) FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'REVOKED';

SELECT 'TO BE RETAINED', COUNT(*) FROM(SELECT ACCESS_TOKEN FROM (SELECT ACCESS_TOKEN, CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'REVOKED') x GROUP BY CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE)y;

DELETE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'REVOKED' AND ACCESS_TOKEN NOT IN (SELECT ACCESS_TOKEN FROM(SELECT ACCESS_TOKEN FROM (SELECT ACCESS_TOKEN, CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'REVOKED') x GROUP BY CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE)y);

SELECT 'AFTER:TOTAL_REVOKED_TOKENS', COUNT(*) FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'REVOKED';


-- 'Keep the most recent EXPIRED key for each CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE combination'
SELECT 'BEFORE:TOTAL_EXPIRED_TOKENS', COUNT(*) FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'EXPIRED';

SELECT 'TO BE RETAINED', COUNT(*) FROM(SELECT ACCESS_TOKEN FROM (SELECT ACCESS_TOKEN, CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'EXPIRED') x GROUP BY CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE)y;

DELETE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'EXPIRED' AND ACCESS_TOKEN NOT IN (SELECT ACCESS_TOKEN FROM(SELECT ACCESS_TOKEN FROM (SELECT ACCESS_TOKEN, CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'EXPIRED') x GROUP BY CONSUMER_KEY, AUTHZ_USER, TOKEN_SCOPE)y);

SELECT 'AFTER:TOTAL_EXPIRED_TOKENS', COUNT(*) FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE = 'EXPIRED';

-- 'Restore the original SQL_SAFE_UPDATES value'
SET SQL_SAFE_UPDATES = @OLD_SQL_SAFE_UPDATES;

END$$

DELIMITER ;


Schedule event to run cleanup task per week
USE `WSO2AM_DB`;
DROP EVENT IF EXISTS `cleanup_tokens_event`;
CREATE EVENT `cleanup_tokens_event`
    ON SCHEDULE
      EVERY 1 WEEK STARTS '2015-01-01 00:00:00'
    DO
      CALL `WSO2AM_DB`.`cleanup_tokens`();

-- 'Turn on the event_scheduler'
SET GLOBAL event_scheduler = ON;


These scripts were initially created by Rushmin Fernando (http://rushmin.blogspot.com/). I am listing them here to help API Manager users.
