GreenVulcano Server and GVConsole

The GreenVulcano 4 ESB server is the beating heart of the GV ESB suite and contains all the features introduced in this documentation.
As a server, GV can deploy and run the business applications developed to carry out the required business service activities.

Through the GVConsole, the user can keep track of the operational side of the deployed business logic across its whole lifecycle: deployment, testing, scheduling, log checking.

From the technical perspective, GV 4 ESB is developed in Java on OSGi: all the components are OSGi bundles, which gives the whole environment flexibility and easy extensibility.
Each new bundled module (e.g. a new adapter) can be hot-imported into the GV environment, with no impact on business continuity.

The same flexibility applies to the addition of new service flows: a new set of service operations can be imported, deployed and executed hot.

Within this section we’ll see the following topics: server startup, GV coding, the engine execution flow, and GVConsole access and usage (deployment, execution, scheduling, logging).

After the GV ESB server is installed, it is executed automatically within the Karaf context: from the installation path, the user can therefore run the karaf executable file and wait for GreenVulcano® ESB to be up and running.

To verify this, run the following command from the Karaf console:

gvadmin@root()> feature:list | grep GreenVulcano

Every row in the output is a core or imported module of GV ESB 4, with its current status.
Once all statuses are “Active”, the server is ready to be used, whether from the GVConsole, from scheduled tasks or from the exposed on-demand services.

GV coding: simplicity is the keyword

GV coding relies on human-readable, XML-based documents to define and maintain the business flows:
the internal framework, exposed through Developer Studio and used by GV ESB, allows users (developers and analysts) to design, code and maintain services in a simple, declarative style.

This approach to code, as opposed to the usual spaghetti-like one (application packages, server containers with their start/stop cold lifecycle), enables a more efficient and dynamic paradigm: no longer server-based, but service-based. Basically, each time a user adds a component to a Developer Studio flow, a new piece of document code is added to the GV tree.

This GV tree is the core part of the deployed project, located under the Karaf GV project path.

The standard GV tree is divided into different files to keep the development contexts separated and independent, and its components are the following:

  • GVCore.xml
    which contains the core part of the services defined in the deployed project
  • GVAdapters.xml
    which contains the adapter modules implemented to support those services
  • GVSupport.xml
  • [to be completed]
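To fix the idea, a GV tree file is just a nested XML document mirroring the flow structure. The sketch below illustrates the concept only: element and attribute names (Services, Operation, Flow, Node) are simplified placeholders, not the exact GVCore.xml schema.

```xml
<!-- Simplified sketch of the GV tree idea: a service containing an operation,
     whose flow is a sequence of connected nodes. Element and attribute names
     are illustrative, not the real GVCore.xml schema. -->
<GVCore>
  <Services>
    <Service name="ECHO">
      <Operation name="toupper">
        <Flow first-node="read_input">
          <Node id="read_input" next="process"/>
          <Node id="process"    next="write_output"/>
          <Node id="write_output"/>
        </Flow>
      </Operation>
    </Service>
  </Services>
</GVCore>
```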

Everything is meant to keep the coding simple: the tree shown in the Dev Studio Vulcon context is mirrored in the corresponding XML document with the very same tree structure. When a new node added in the Flow Designer is saved, the configuration is added both to the GVCore Vulcon tree structure and to the GVCore.xml file, at the same point in the tree.

One last file is mandatory for a project, even though it is not part of the actual tree: the properties file, which contains all the configuration rows (key-value pairs) referenced by the XML tree files. Each time an xmlp placeholder is present, it refers to its matching entry in the properties file.

This is how GV externalizes its properties: it decouples variables from values and avoids awful hard-codings, which are always difficult to track and maintain.
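As an illustration of the mechanism, a key-value row in the properties file and the xmlp placeholder referring to it could look like the following. The property name, the DataSource element and the exact placeholder syntax shown are illustrative of the idea, not taken from a real GV project:

```
# properties file: one key-value couple per row (names are illustrative)
db.connection.url=jdbc:postgresql://localhost:5432/gvdb
```

```xml
<!-- XML tree file: the xmlp placeholder refers to its mate in the properties file.
     Element and attribute names are illustrative. -->
<DataSource url="xmlp{{db.connection.url}}"/>
```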

GV engine execution flow

As introduced, a GV service flow can be represented by a graph with one start node and one or more end nodes: when a service operation is run on GV ESB, the inner GV core engine starts the execution from the Start node, carrying its decisional and operational process through the nodes involved in that thread (or session).

The main context shared by all the nodes defined by the framework is the GVBuffer, a container object fit for any type known to Java (and its imported libraries): this object is used as a vessel to transport information between nodes (along the connection wiring, if you will).

Another important context is the set of GVBuffer Properties, which can also be carried from one node to the next during the session flow: each property carries a simple named value known to the service operation.
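To make the "vessel" idea concrete, here is a minimal Java sketch of a buffer-like container carrying a payload plus named properties between nodes. This `Buffer` class is a simplified stand-in written for illustration only, not the real GVBuffer API:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for the GVBuffer concept: a payload plus named properties.
// This is NOT the real GVBuffer API, just an illustration of the "vessel" idea.
public class Buffer {
    private Object object;                                   // the payload carried between nodes
    private final Map<String, String> properties = new HashMap<>();

    public void setObject(Object o) { this.object = o; }
    public Object getObject()       { return object; }

    public void setProperty(String name, String value) { properties.put(name, value); }
    public String getProperty(String name)             { return properties.get(name); }

    public static void main(String[] args) {
        Buffer buffer = new Buffer();
        buffer.setObject("<order id=\"42\"/>");   // payload handed to the next node
        buffer.setProperty("CHANNEL", "HTTP");    // a simple named value for the flow
        System.out.println(buffer.getObject());
        System.out.println(buffer.getProperty("CHANNEL"));
    }
}
```

Each node in a flow would receive such a buffer, read or alter its payload and properties, and release it to the next node.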

The transport of these contexts between nodes relies on the so-called “input/output coupling”:
a node B inherits its working data set from its predecessor node A if A’s output label matches B’s input label.
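In the flow configuration, this coupling is simply label matching between consecutive nodes. The fragment below is a sketch of the idea with illustrative attribute names, not the exact GV schema:

```xml
<!-- Node A's output label matches node B's input label, so B inherits
     A's working data set. Attribute names are illustrative. -->
<Node id="A" output="data_ab" next="B"/>
<Node id="B" input="data_ab" output="data_bc" next="C"/>
```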

Each node involved (apart from non-processing ones, such as the conditional nodes) has three distinct phases:

  1. Input
    this phase receives the GVBuffer data arriving at the node and allows the application of sub-operations, known as Input Services.
    The input also defines the label, unique within that precise flow, which mirrors the data present in that input context. Typical example: a DTE transformation applied to the input data.

  2. Perform
    this phase contains the body of the process that defines the node’s purpose, applied to the arriving input or, if present, to the data resulting from the Input Services.
    Typical example: a JavaScript script body, which alters the GVBuffer and releases it, altered, to the next phase, Output.

  3. Output
    this last phase allows a final data manipulation, as a form of “post-processing” alteration. The output also defines the label, unique within that precise flow, which mirrors the data present in that output context.
    Typical example: a DTE transformation applied to the output data.

By “unique within that precise flow” we mean that an input/output label is treated as unique within a flow/subflow execution: each time an alteration is applied under that label, the flow keeps track of it, so the developer can follow it as a linear evolution of the data, to carry on, alter, concatenate, delete and so on.
Evidence of these phases can be found in the service log, which, as a usual logging system, keeps track of the flow execution.

The three phases are a basic way to organize a node’s single operation: data arriving at a node is usually read into the correct form (input phase) with a proper adaptation from the input format; the result is then processed (perform phase) to obtain the target manipulated data, which is ultimately re-transformed/adapted (output phase) into the final presentation format.

Thinking in terms of a flow through a sequence of nodes A, B and C connected as a simple execution pipe (meaning the input/output coupling is defined between A and B, and between B and C), the final output of node C is nothing but the application of function/node A to the initial input, followed by the application of B, and then C, to each intermediate result.

The “coupling” mechanism guarantees that the final result is logically equal to the ordered application of a set of sub-functions (A, then B, then C).

Imagining this as an ordered sequence of three connected black boxes A, B and C makes it even easier to see.
With this logic, a flow execution starts from an initial input and, following the order of the executed nodes, builds an ordered manipulation and management of the data, based on the conditions and transformations added at design time, in order to reach the final goal.
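In Java terms, the pipe described above behaves like ordinary function composition. The sketch below uses plain `java.util.function` (no GV API involved) to show that running A, then B, then C on an input is the same as applying the composed function once:

```java
import java.util.function.Function;

// Plain-Java illustration of the A -> B -> C execution pipe:
// the final output is just the ordered application of the three functions.
public class Pipe {
    public static void main(String[] args) {
        Function<String, String> a = s -> s.trim();          // node A: adapt the raw input
        Function<String, String> b = s -> s.toUpperCase();   // node B: process the data
        Function<String, String> c = s -> s + "!";           // node C: final presentation

        // Coupling A with B and B with C yields one composed flow.
        Function<String, String> flow = a.andThen(b).andThen(c);

        String stepByStep = c.apply(b.apply(a.apply("  hello  ")));
        String composed   = flow.apply("  hello  ");

        System.out.println(stepByStep);                   // HELLO!
        System.out.println(composed.equals(stepByStep));  // true
    }
}
```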

[to be completed]

GVConsole access and sections usage

With GV 4 ESB up, a user can easily reach the administration console “GVConsole” by browser, at the URL:

http://[GV ESB IP number or resolved machine name]:[port]/gvconsole

In order to log in, the user must belong to a user group known to the GV administrator: the user will be able to use the console according to the group’s rights, and thus access its features and controls.
In any case, a default “gvadmin” role is always present in a vanilla GV 4 ESB installation, so that the administrator can access the console and start defining/assigning roles to authorized people.

The GVConsole gives access to the following menu sections, provided the logged-in user’s group has the proper rights:

  • Users: used to add/edit/delete authorized group-based credentials for the GVConsole
  • Scheduling: used to add/edit/delete time-based automated schedules for the service operations that require them
  • Deploy: used to add/remove GV configurations containing the business services
  • Properties: used to add/edit/delete GVBuffer properties for placeholder purposes (e.g. database connections)
  • Execute: used to run service operations chosen from the imported service operation list
  • Monitoring: used to display the statistical resource usage of the GV ESB server (memory, threads, classes, CPU)
  • Configuration: used to visualize the service operations imported into GV ESB
  • Settings: used to add/edit/delete additional information in the GV configuration
  • My profile: used to display the currently logged-in user’s information
  • Logout: used to log out and end the current GVConsole session

The most relevant of these are explained in more detail in the upcoming sections.

Service deployment

Having followed the Quickstart guide pertaining to Developer Studio 2, and especially the project export/import section, we can get to the service deployment part.

From the browser, reach the GVConsole window, press the “Deploy” menu button, then press the “+” button to import a new project configuration: finally, select the zip file just exported from Dev Studio.

You’ll then be prompted for a deployment name: choose one fitting the imported project; if the project is Echo/Normalize, for example, we can name it “ECHO”. At this point the project is imported (and listed under the Deploy list) and ready to be deployed by pressing the green Deploy button.

The same method applies to any project at hand, moving back and forth between GVConsole and Dev Studio to pass almost seamlessly from developing to testing.

[to be completed]

Service execution

Once a project is deployed, its service operations can be executed from the Execute section, reached from the main menu of the GVConsole.

This section provides everything needed to run an operation:

  • Operation: the operation to run, chosen from the drop-down list
  • Body: the string payload made available to the operation as initial input within the input GVBuffer
  • Credentials: the credential info, in case the operation is configured with its Role and Group.
    Attention: service credentials and GVConsole access credentials are not overlapping sets: the former serve business access purposes, the latter grant authorized GV administration access.
  • Properties: as with the Body, this allows filling in a set of key/value pairs, in the form of GV properties, available as initial input
  • Output: the only non-interactive section, used to display the last node’s output

Once all the sections needed by the service operation are filled in, the operation can be executed with the blue Run button, leading to a (hopefully expected) output.

An important clarification: the Execute section is meant for testing purposes, not for simulation. The two might sound similar, but they are not: once executed, an operation that has fulfilled its goal leads to a committed result.


and simpleReport are not persistent operations, in the sense that their work is run and released at runtime in volatile memory, with no tangible side effects.
On the other hand, a successful operation leading to a database write (to cite a real-world example) will commit that write.

Service usage: on-demand vs. scheduling

The Execute section provides an explicit way of running an operation; when a service operation is instead exposed to external applications, it can be run through its target channel. The simpleReport operation falls into this scenario: it can be requested with an HTTP GET method, via a web browser, by simply calling the correct target URL.

This type of solution is mostly used when a set of operations must be made available to customer requests (on-demand requests).
GV also allows storing a scheduling configuration for the service operations that need an automated, repeated execution.

Such a sequence can be easily defined by a Quartz rule, which lets the user set the time-based repetition with precision down to the second.
Within the GVConsole, the Scheduling section can hold multiple time-repetition rules for each operation.
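A Quartz rule is a cron expression with a leading seconds field (seconds, minutes, hours, day of month, month, day of week), which is what gives the second-level precision mentioned above. A few standard examples:

```
0 0/5 * * * ?       # every 5 minutes
30 0 2 * * ?        # every day at 02:00:30
0 15 8 ? * MON-FRI  # weekdays at 08:15:00
```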

Service logging

  • [to be completed]