Technology Product Build question

gioppo
Posts: 26
Joined: 20 Mar 2009, 13:34

Can you give an example of a Technology Product Build (and role)?
Thanks
Luca
jason.powell
Posts: 32
Joined: 04 Feb 2009, 15:01

Hi Luca,

We have created a Protege project containing a simple example of Technology Product Builds and Technology Build Roles. It can be downloaded here.

The example demonstrates how two different configurations of Technology Products (Technology Product Builds) are used as a Java Web Application Platform (Technology Composite).

The Technology Product Build Role provides the means to create this relationship and to define additional contextual information (e.g. whether the configuration is for production use or is in the process of being decommissioned).

In the example project, if you navigate to Technology_Product_Build in the Protege Class Browser, you will find a Windows-based configuration that is used for pilot implementations of a Java Web Application Platform and a Unix-based configuration that is used for production implementations.
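The structure of that example can be sketched as plain data. This is only an illustration of the relationships, not anything taken from the actual Protege project; the dict layout and field names are invented for the sketch:

```python
# Illustrative sketch only: the relationships from the example project,
# expressed as plain data. Field names are invented for this sketch.
java_web_platform = {
    "class": "Technology_Composite",
    "name": "Java Web Application Platform",
    "builds": [
        {"name": "Windows-based build",
         "role": {"usage": "Pilot"}},       # Technology Product Build Role
        {"name": "Unix-based build",
         "role": {"usage": "Production"}},
    ],
}

# The build role carries the contextual information, e.g. production vs. pilot.
production_builds = [b["name"] for b in java_web_platform["builds"]
                     if b["role"]["usage"] == "Production"]
print(production_builds)  # -> ['Unix-based build']
```

The point is that the Technology Product Build Role, not the build itself, carries the contextual information (pilot vs. production use).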

Jason
Essential Project Team
schulmkg
Posts: 35
Joined: 02 Aug 2011, 18:46

OK, I'm confused.

The Metamodel documentation indicates that Technology Product and Technology Product Build are both subclasses of Technology_Provider.

Yet when I open up my hierarchy, Technology_Product is under Technology_Product_Architecture_Type.

Why this matters is that we are trying to build a quick and dirty - but reusable and expandable - "Technology Inventory Catalogue" (à la TOGAF) for a set of Solution Offerings our company provides.

Your documentation suggests the quickest way to do this is to use "Application Deployments" to map applications to the technology layer. So we are taking the hypothetical position that a "Solution" that has been "deployed" to our "Solution Portfolio" is a "deployment", and that the technologies specified in the architecture documents in the Portfolio are the instance deployments of technology.

Not actually a bad mapping: although most of our solutions are ostensibly "technology agnostic", most have been physically deployed with identical patterns.

So we are filling in
Local Name
Deployment of App Provider
Contained App Deployments (where applicable)
Deployment Group (taxonomy within the Portfolio)
Deployed Application Instances

And within Deployed App Instances, we are enumerating the
Dependent on Technology Instances subclass

And then within Dependent Upon Technology Instances, we are filling in the "Instance of Technology Provider" and hoping to point that back to Technology Product.

Except that since Technology Product is NOT under Technology Provider but is instead under Technology Product Architecture Type, we are not getting any mappings between the Technology Products being used in the App Deployment and the App Deployment itself.


Is there a better way to do this?
schulmkg
Posts: 35
Joined: 02 Aug 2011, 18:46

More investigation leads me to be even more confused.

Somehow, Technology Product has moved from where it belongs in the EA Class hierarchy to where it does not belong.

Is there some way that I can manually intervene to fix this?!
jason.powell
Posts: 32
Joined: 04 Feb 2009, 15:01

Firstly, from what we can see from your original post, your approach makes sense.

When looking to map Technology Instances back to Technology Providers (i.e. Technology Products or Technology Product Builds), "Instance of Technology Provider" is indeed the correct field to populate.

With regard to your question of where "Technology Product" should be in the EA Class hierarchy, you will see from the screenshot below that it should be a sub-class of Technology Provider. In addition, when clicking the "Add Instance" button of the "Instance of Technology Provider" field (step 2 in the screenshot), you should be presented with a selection of Technology Providers (either Technology Products or Technology Product Builds). If you are not seeing this, then it would seem that your EA Class hierarchy has been modified in some way from the original Essential Meta-Model baseline.

Image

Protege does indeed provide the facility to not only manipulate the Class hierarchy, but also compare two Protege projects (e.g. your current project and the original Essential Baseline Meta-Model) to identify any differences.

However, before taking any specific action, we would be keen to understand whether your hierarchy does indeed look the same as the screenshot, and if not, whether you are aware of any modifications to the baseline hierarchy that have been made since your original installation of the tool.

Jason
Essential Project Team
schulmkg
Posts: 35
Joined: 02 Aug 2011, 18:46

jason.powell wrote: ...it would seem that your EA Class hierarchy has been modified in some way from the original Essential Meta-Model baseline.
Thanks Jason. The problem is that SOMEHOW the "Technology Product" subclass has been moved from
Technology_Logical -> Technology_Provider->Technology_Product
to
Technology_Logical->Technology_Product_Architecture_Type->Technology_Product

This was not an INTENTIONAL CHANGE, and I need guidance on how to undo it. We had it the other way around up through last week, and I'm not sure what changed. See attached screenshot of what it looks like now.

[hmm - gotta figure out how to upload images. Coming shortly.]


.. Update

I figured out how to "drag and drop" classes from inside the Instance Tree view of the ontology editor. I think I need to disable the Instance Tree tab, because we are using a virtual server that we access remotely and we sometimes get some "lag" (jitter) in our mouse commands.

OK, but back to the original question: we are still having problems getting a "back trace" from a particular technology back to the parent application in the Technology Report.
jonathan.carter
Posts: 1087
Joined: 04 Feb 2009, 15:44

Sounds like you've dragged Technology Product back to where it should be. With it in the wrong place, this may have prevented you from defining some of the relationships that you need; otherwise, dragging it back to where it should be (as a direct subclass of Technology Provider) should put you back where you need to be.

The out-of-the-box Technology Product View provides the back trace that I believe you are after, showing all the Applications that depend on this Technology Product.

To provide this, the View navigates the model from the Technology Product to the Application via the Technology Product Build (an architecture of Technology Products) and the deployment of the Application(s) that depend on that Technology Product Build.

To make this work, starting with the Application:
  • Define an Application Provider for the application.
  • Define a deployment for this Application Provider (e.g. Production).
  • Define the Technology Product Build that supports the application. This might be a 'common' or 'standard' build or a specific product architecture for the application.
  • Define the relationship between the Application Deployment and the Technology Product Build.
Your model now links the application to the set of Technology Products that it depends on, and you can see this from the Technology Product summary View.
In addition, if you specify the set of Technology Capabilities that the Application Provider requires, these are used by the Application Summary View to organise the rendering of the set of Technology Products that the application depends on (coming at it from the opposite end: application->technology rather than technology->application).
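The chain above can be sketched as follows. All names and structures are hypothetical; the real relationships live in the Protege model, and this only shows the shape of the navigation that the Technology Product View performs:

```python
# Hypothetical sketch of the Application -> Technology Product chain.
# Names and structures are illustrative, not the Essential API.
app_provider = {"name": "MyApp"}

# Steps 2-4: a deployment of the provider, linked to a Technology Product Build.
build = {"name": "Standard Java Web Build",
         "products": ["Apache Tomcat", "MS Windows Server"]}
deployment = {"provider": app_provider,
              "role": "Production",
              "technical_architecture": build}

def applications_using(product, deployments):
    """Back trace: which applications depend on a given Technology Product?"""
    return [d["provider"]["name"] for d in deployments
            if product in d["technical_architecture"]["products"]]

print(applications_using("Apache Tomcat", [deployment]))  # -> ['MyApp']
print(applications_using("Oracle DB", [deployment]))      # -> []
```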

When defining the Technology Product Build, make sure to connect each element on the diagram with the Technology Product that is being used. It's not enough just to label each item on the diagram as 'MS Windows'; you have to model it.

If you have any problems defining the relationships that I've described above, let me know. It could be that more classes have moved to where they shouldn't be. Disabling the Instance Tree tab is a good idea.

Jonathan
Essential Project Team
schulmkg
Posts: 35
Joined: 02 Aug 2011, 18:46

jonathan.carter wrote: The out-of-the-box Technology Product View provides the back trace that I believe you are after, showing all the Applications that depend on this Technology Product.
OK, it's not quite working, but I suspect that I'm missing some linkages. See below.
To provide this, the View navigates the model from the Technology Product to the Application via the Technology Product Build (an architecture of Technology Products)
Not sure I understand this, nor quite where to find it.
  • define an Application Provider for the application. we've done this
  • define a deployment for this Application Provider (e.g. Production). I think we've done this, except I think this goes the other way: an Application_Deployment contains in its template a link to the Deployment of Application Provider, which we have linked in. But it only has fields for
    • Name,
    • Desc,
    • Provides_app_svc,
    • Provides_app_func_impl,
    • Defining_static_arch,
    • High_lvl_Sftware_arch,
    • Depends_on_tech,
    • Type_of_appl,
    • Purpose,
    • Physical_deployments,
    • Lifecycle_Status,
    • Supplier,
    • Operates_on_info_Repre,
    • Code_Base,
    • Biz, IT owners and contact,
    • Req._Tech_capabil.
    • Supported Biz proc,
    • and external links
  • define the Technology Product Build that supports the application. This might be a 'common' or 'standard' build or a specific product architecture for the application. So this is what I don't understand: I have no idea how to link a Tech_Product_Build to anything meaningful.
  • Define the relationship between the Application Deployment and the Technology Product Build. And how do you do this? I see nowhere how to do this.


Your model now links the application to the set of Technology Products that it depends on, and you can see this from the Technology Product summary View.

Nope. There is NOTHING in the App view (and there seems to be no way to get to the Application Deployments views).

In addition, if you specify the set of Technology Capabilities that the Application Provider requires, these are used by the Application Summary View to organise the rendering of the set of Technology Products that the application depends on (coming at it from the opposite end - application->technology rather than technology->application)


When defining the Technology Product Build, make sure to connect each element on the diagram with the Technology Product that is being used.

Diagram?! This needs to be able to be entered textually. We have over 180 applications we are dealing with; we don't have the time to draw anything for them in the initial pass.
neil.walsh
Posts: 444
Joined: 16 Feb 2009, 13:45

Ok...

To define a deployment...
1. Choose the Application Provider you wish to define a deployment for.
2. In the form for the Application Provider, find the Physical Deployment field and choose Add New
3. You should now have a new form to create a deployment
4. Give the deployment a local name e.g. MyApp (London) and a Deployment Role e.g. Production
5. Now create a new Application Deployment Technical Architecture and in the resulting popup choose Technology Product Build
6. Populate the Technology Product Build as appropriate including the Technology Provider Architecture (this is the diagram that Jon was talking about)

This should point you in the right direction...

Reading back over this post brings up a few things to me though...

With regards to creating diagrams, Essential only uses diagrams where it is easier to model using these rather than via textual elements. Many other tools model entirely using diagrams, which can be difficult and time-consuming. Hopefully Essential has found the right balance. Feedback on what works and what doesn't work for you is always welcome. If you can think of a better way of doing something, feel free to propose a change - just add something to the forum!

Also, have you considered some training? I know people who have benefited hugely from it, and it's accelerated their projects. A couple of days of training can save you weeks of fumbling around. The project sponsors, EAS, have a link on the homepage to training services.

The same is true of support: EAS can also provide you with some help to make sure you're getting the most from the toolset, and share best-practice experience. Forums are a great place to share issues and get answers, but it's never the same as getting some proper assistance. Maybe something to think about if you have any time constraints on your project...

Hope this helps

Neil
schulmkg
Posts: 35
Joined: 02 Aug 2011, 18:46

To define a deployment...
1. Choose the Application Provider you wish to define a deployment for.
2. In the form for the Application Provider, find the Physical Deployment field and choose Add New
3. You should now have a new form to create a deployment
4. Give the deployment a local name e.g. MyApp (London) and a Deployment Role e.g. Production
So far so good. We created that somewhat backwards, but we have that data.

Now the problem. What we did, following the guidance in the Tutorial, was to use the Deployed Application Instances field to create a "Deployed Application Instance" that is "deployed" to our Portfolio Center.

Within the Deployed App Instance, under Dependent on Technology Provider (rather logical, it seemed), I added a link to either Hardware, Software or Infrastructure Software Instances.

For Example
  • Microsoft Windows 7 – as an Instance of Infrastructure Software Instance
    • The MSFT Win 7 Infr Sftwre Instance then invokes Instance of Technology Provider to be the Instance of Technology Product
      • Microsoft Windows 7 as the Product Name and Desktop Operating system as the Role
So why then can’t the query go back up that tree?
5. Now create a new Application Deployment Technical Architecture and in the resulting popup choose Technology Product Build
6. Populate the Technology Product Build as appropriate including the Technology Provider Architecture (this is the diagram that Jon was talking about)
And this is precisely where having a diagram is a catastrophe. I've got 182 applications described in the above manner (i.e. via the Deployed Application Instance) that I would have to go back and DRAW stuff for, when I have no use for the drawings. I just want to enumerate a list of the technology dependencies that a particular App has.
This should point you in the right direction...
I guess what sent us off in the wrong direction is the App tutorial, which specifically says:

Application Deployments define the package of software components that make up an application in a particular environment, e.g. development or production.

Which made us think that this was the way to map software components back into Applications in the quickest manner possible, since we were not going to be distinguishing between different types of deployments outside of our portfolio.


So I now need to figure out the best way to extract the kinds of reports we need - either by reworking the existing reports, or via some automated mechanism for moving the dependencies we have built out in the Application Deployment Instance up into your Technical Architecture. Right now the reports don't do what I was hoping they would do. I can find the information to build the reports by walking the tree, but the automated reports don't.

Frankly, anything that requires drawing does not scale well to large-scale efforts. And anything that can be described in a drawing can also be described in XML (which of course you know, since that's what you do), and hence in a tabular or dialogue-driven format.

Training someone to use dialogues or tables is much easier than getting them to put together drawings correctly. (Hell, I still find myself second-guessing which way an arrow is supposed to go in a Use Case.)

Unfortunately I have no budget for hiring your team at the current time. Wish I did, but I don’t.
jonathan.carter
Posts: 1087
Joined: 04 Feb 2009, 15:44

I think there's still a bit of confusion here about the different links between application things and technology things.

What you've described about the Application Deployment Instances, the hardware instances and the infrastructure software instances is working at the Physical level. What this describes is the set of actual instances of technology products, physical servers etc. that support an Application - and I think this is probably more detail than you are looking for.
If you do need to model at this level of detail, just make sure you've connected the relevant instances to the Technology Node (e.g. a server) on which they are deployed, and the out-of-the-box Views for Technology Nodes will show all these dependencies.

Typically, before we model things at the physical technology level, we describe the technology that supports the Application Deployments.
Why do we map the Technology Products to Application Deployments (that is, logical technology to physical application elements)? We do this because we might need or want to use different technology products in the production application environment compared to the development or test environment - even if it's just using smaller servers.

The idea, though, is that we can define standard builds of technology (Technology Product Builds) and then map these to Application Deployments as required. So, many Application Deployments could map to the same Technology Product Build (in the Application Deployment Technical Architecture field of the Application Deployment). Of course, each application might have a different technical architecture.

You can define the Technology Product Build without having to define an architecture - i.e. you can say that the Application Deployment has a particular Technology Product Build without having to graphically model all the Technology Products that are involved in that architecture. That doesn't tell us which specific Technology Products are supporting the application, but sometimes that's OK.

Why have we got a graphical model for defining the architecture of the Technology Product Build (the set of Technology Products and the dependencies between them) rather than a tabular or textual form? Well, it's because the build is more than just a list of products. We need to be able to (optionally) capture the dependencies between them and doing that graphically is far easier than via textual forms.

However, if you are entering a lot of data about the technologies supporting your applications and the 'manual' form entry is proving too onerous, it can be worth looking at some of the automation and integration tools. If you're comfortable with it, the Script Console tab can be very useful for scripting repeatable modelling activities. Additionally, if you have your source data in an XML form, there's the Integration Tab or the Data Load tool contributed by Clint Cooper. We also have some graphical importing tools coming soon.

Something to bear in mind about the out-of-the-box Views and Reports provided in Essential Viewer is that these are just some possibilities. The idea with Viewer is to provide a toolkit for building the Views that you need, rather than provide all the Views that all possible users would ever need. So, if the out-of-the-box Views are not showing what you need with respect to the Application, we'd be happy to help you to put together the View you need.

Jonathan
Essential Project Team
schulmkg
Posts: 35
Joined: 02 Aug 2011, 18:46

Thanks Jonathan. I appreciate the explanation. I kinda had reached that understanding - and perhaps the mistake I made in my approach was that what I needed was a quick way to collect technology to application mappings.

And precisely because I lacked the architectural topological info, I wanted to restrict this particular view to just our "portfolio deployment". That said, I don't think I quite understood the intricacies and limitations of this up front.

One of the biggest challenges of EA is getting management buy-in, so I think many projects do a "quick and dirty" first pass, whereas your tutorial is largely oriented around how to build a completely modeled system.

I think a couple of "if your goal is X, here is how to start quickly" guides would be more helpful. Because regardless, the complexity of any EA system requires some "playing around with" the tooling to understand the nuances of the particular implementation.


As for the drawing input: I would suggest that because a lot of existing system architectures are documented in either UML or Visio, having a UML and/or Visio import/export would be desirable. Otherwise you are having people redraw the architectures, with concomitant transcription errors. I realize Visio is not in the Open Source realm of thinking, but fundamentally Microsoft Office is the de facto standard of businesses.


As to my problem, the gotcha - as I understand the existing system - is that in mapping to a technology node, I have to do that for a particular software instance.

That means that for the 75 or so Application instances that use Windows Server as part of the solution offering, I have to create 75 "Infrastructure Software Instances" of Windows Server on 75 different technology nodes.

And since I have an average of 5 Depends on Technology Instances per each of the 180 applications, I'm looking at a combinatorial explosion.

My ideal is to figure out how to transfer the elements in "Depends on Technology Instance" to the concomitant Technology Product Build.

I'm doing some experiments right now with the dataset to see how feasible it is to write an XSLT script to simply retag the data that way.
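As a rough illustration of that retagging idea (every element and attribute name below is an assumption about the export format, not the real Essential schema), the transformation would have this shape:

```python
# Hypothetical sketch: collect "Dependent on Technology Instance" links from
# an exported XML model and regroup them as per-deployment build candidates.
# ALL element/attribute names here are assumptions, not the real export schema.
import xml.etree.ElementTree as ET

SAMPLE = """
<model>
  <deployment name="App1 (Portfolio)">
    <dependsOnTechnologyInstance product="MS Windows 7"/>
    <dependsOnTechnologyInstance product="Apache Tomcat"/>
  </deployment>
  <deployment name="App2 (Portfolio)">
    <dependsOnTechnologyInstance product="MS Windows 7"/>
  </deployment>
</model>
"""

def builds_from_instances(xml_text):
    """One candidate Technology Product Build per Application Deployment."""
    root = ET.fromstring(xml_text)
    builds = {}
    for dep in root.findall("deployment"):
        products = [t.get("product")
                    for t in dep.findall("dependsOnTechnologyInstance")]
        builds[dep.get("name") + " Build"] = sorted(products)
    return builds

print(builds_from_instances(SAMPLE))
```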
jonathan.carter
Posts: 1087
Joined: 04 Feb 2009, 15:44

We appreciate the requirement for quick and dirty, but this shouldn't be confused with woolly or poor-quality models - these don't really help anyone, because they supply bad information. Modelling in Essential is building a knowledge base rather than a diagram repository, and "it's sort of like this…" (which we all do in PowerPoint or the like) doesn't really work in that context, so it's important that whatever is captured is accurate enough.

And accurate enough is the operative phrase. We've designed the meta model to enable modelling of just the relevant areas to answer the questions and provide the information that you require at the moment, in order to add some value or solve an issue for someone (often the business) by providing them with insights that they couldn't gain beforehand. This is how EA can get buy-in from management: by demonstrating some real value rather than purely presenting architectural models that can often be difficult for those outside the EA team to interpret. This is why we have designed Essential Viewer to provide the ability to present views of the model that make sense to the relevant stakeholders. It's often not appropriate to expose people outside the modelling team to the model in Protege.

It could be that we need to start with a broad but shallow capture of the technology landscape and we can do that without having to worry about the Information or Business Process elements. If there are things that we don't have the detail on yet, we can just create the definition as a 'black box' and come back to it later. The model manages all this for us so that we can incrementally populate the model even in a scatter-gun approach if we need to. Focussing on what we need to present / provide insights on gives us the scope of the meta model that we need to use and ensures that we don't have to model the entire enterprise architecture!

Thanks for the suggestion for future tutorials - this kind of scenario-based approach is something that we have also identified as being very useful and we plan to produce those. Obviously, there are a lot of potential scenarios, so there will ultimately be a lot of tutorials!

I take your point on the ability to import from diagramming tools. There have been some posts on the forum about importing XMI from UML tools such as Sparx. The trouble with any such import (or any import, for that matter) is that there need to be clear semantics about what each element means. None of these tools provide standard mechanisms for doing this, and the variance in how such models are created means that every import is pretty much a custom import - unless you can define some clear modelling guidelines, e.g. in Visio or UML. I think the more rigorous modelling of UML is probably better than free-form diagrams like Visio's - e.g. you could standardise on the use of UML Stereotypes to identify what each UML class on the diagram means. The bottom line is that there can't really be a generic "Import->Visio Diagram" capability without defining the semantic mapping between the source diagram and the Essential Meta Model.

If I understand your modelling 'gotcha' correctly, you have 75 different Application Providers to model and you want to model the set of technologies that each Application Provider depends on.
To do this, you do not need to go to the Physical Technology view of things. Rather, for each Application Provider, you can define a 'default' Application Deployment (typically the production deployment) and then associate the relevant Technology Product Build with that deployment. Where your Application Providers have the same technology dependencies, you can define a 'shared' Technology Product Build and re-use that from each appropriate Application Deployment. Worst-case scenario, you have to define 75 Technology Product Builds, but if each truly has a separate technology architecture, that's just a fact of life and we have to model it. As I mentioned before, there are some things we can do during imports - for example, creating many of the modelling instances, such as the Application Deployments and the Technology Product Builds, as derived instances.
Note that the Technology Product Build is a 'black box' definition and it's in the 'architecture' of that Technology Product Build that we define the set of Technology Products that are used in that build and - if we need to - the relationships between those.

This might seem to contradict the software architecture components mentioned in the tutorial but actually it's complementary. The meta model allows us to model the software components that are involved in delivering the application and we can relate those components to the supporting technology but we don't have to. If all we need to understand is the set of Technology Products that provide the platform for the application, then the Technology Product Build related to the Application Deployment does that for us - and ultimately, if we model everything, then it all joins up. But modelling everything is not mandatory.

You've mentioned the Technology Instances and Technology Nodes, and I'd just like to check whether those are really helping you or not. If I understood your requirement above correctly, I'm not sure you need to go to this level of detail, which is about the physical, deployed technology in terms of actual servers (not just types of servers), physical deployment topologies and so on. This is very useful if we need to understand the current utilisation of the physical infrastructure (e.g. which servers can we turn off? what happens if this server fails?), but is more detail than we require if the focus is on the types of technology and the products that we are using to support the applications.

In terms of transferring the elements that you have as Technology Instances to Technology Builds, this should be relatively straightforward. Most of those Technology Instances are going to be instances of Infrastructure Software Instance, each of which is an instance of a Technology Product. It might be worth seeing whether any of your 75 applications share a common technology platform before creating a build for each from the Technology Instances you've got, because if you can share Technology Product Builds, that will save a lot of modelling time.
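The "check for shared platforms first" step can be sketched like this; the application names and product sets are made up, and the point is simply to key candidate builds on the product set rather than on the application:

```python
# Sketch: find applications that share an identical technology platform, so
# one Technology Product Build can be reused. Data and names are illustrative.
from collections import defaultdict

app_dependencies = {
    "App1": {"MS Windows Server", "Apache Tomcat"},
    "App2": {"MS Windows Server", "Apache Tomcat"},
    "App3": {"RHEL", "JBoss"},
}

# Group applications by their exact product set.
shared_builds = defaultdict(list)
for app, products in app_dependencies.items():
    shared_builds[frozenset(products)].append(app)

for products, apps in shared_builds.items():
    print(sorted(apps), "->", sorted(products))
# App1 and App2 can share a single Technology Product Build.
```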

Keep us posted as to how things are going and thanks for your feedback. Don't hesitate to post if you have any more questions or comments.

Jonathan
Essential Project Team
schulmkg
Posts: 35
Joined: 02 Aug 2011, 18:46

I'm going to split this up into three replies since otherwise it is getting too cumbersome.

On the quick-and-dirty approach: the problem with 'black-boxing' is that you are still committing to a particular mapping which may turn out not to be accurate, and undoing an existing mapping to a 'black box' is additional work. I'd rather have dangling references that result in no data showing up in a report. I understand you need a balance here, but in part the 'black boxing' approach is what has gotten us into trouble.

Furthermore, in essence you are talking about a Waterfall-style approach to design, which, while having its strengths, is currently not much in favor with technical management. Instead, a more incremental Scrum / 90-Day Rapid Action Plan approach is required (the latter is what we had).

So I'm not sure you even need to rewrite tutorials so much as provide some "Rapid Action Plan" guidance along the lines of "If your initial goal is X, start here and ignore these things. If it is Y, start there and ignore those things."
schulmkg
Posts: 35
Joined: 02 Aug 2011, 18:46

On the Visio/UML import question: I don't think you can really make the argument that UML is dramatically more rigorous. UML has a number of ambiguities within it, and every XMI I've worked with has required manual "fixups" when moving between tools from different vendors.

So you could provide a default mapping config file within which users could edit the mappings to conform to their model. My recommendation would be to start with a standard like SysML rather than a particular tool (like Sparx). The advantage of SysML is that you get folks like IBM as well as Microsoft participating in it. So while there will still need to be fixups for particular tooling implementations, it provides a solid starting place.
schulmkg
Posts: 35
Joined: 02 Aug 2011, 18:46

OK, on to my problem at hand. No, the issue is not that I have 75 different Application Providers.

The challenge is that we are not modeling applications per se, but rather a more abstract model of Application Offerings in a portfolio that we sell - i.e. not for internal IT management, but for creating a more coherent OFFERING architecture. And because the hierarchy of the Portfolio was already set for us (inherited, with zero changeability) in ways that are not very compatible with EA perspectives, we opted not to hammer square pegs into round holes.

So instead we chose to define an application that appears in our Portfolio Center as having been 'deployed to the Portfolio center' and thus capture the data from the perspective of

Application Deployments.

This seemed the right way to go because we could
1) have contained application deployments - which met how our Portfolio is architected in places

2) map directly to "deployment Instance", which in turn had a "Depends on Technology" attribute, which is exactly what we were trying to model:

Applications deployed to the Portfolio Center that have implementation dependencies on particular technologies.


So what I now have is ALREADY BUILT

182 Application Deployments

Each linked to
a Deployed Application Instance
0 or more "Contained Application Deployments"
an Application Provider.

The App Provider simply has the Production/Sunset/Retired status tracked

the Deployed Application Instance has
Instance of Technology Provider tracked - so that we can track the "Role" of the offering (since our offerings are often in market competition with other vendors)

Is Dependent On Technology Instances tracked

Most of the Technology Instances are Infrastructure Software Instances, but we have some hardware dependencies (particular kinds of switches, CPUs, etc.). All of these in turn are linked to their own Instance of Technology Provider.

Here we also track whether the ITP is "required" or "optional" in the Instance Deployment Status (we extended the enumeration to contain these) - since some of our deployments can be deployed on divergent tech platforms.


So one "instance of Infrastructure Software Instance" can map to multiple "Deployment Instances", each of which is a different Application Deployment. At a minimum each Software instance maps to at least one Deployment, at most to about 75 of the 180.

So what I need to do is figure out a way to

EITHER
create 180 Technology Product Builds populated from the "Dependent on Technology Instance" attribute field

OR to figure out some way to build a report that walks this tree from the App Deployment level down to the underlying Instance of Technology Provider.
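Roughly, the report option needs a walk like this - sketched here in plain Python over a hand-written snapshot of the model (the dict layout and all names are illustrative, not the actual Essential/Protege API):

```python
# Illustrative snapshot of the model; real data would come from a
# repository export, not hand-written dicts.
deployments = {
    "Portal App Deployment": {
        "deployed_app_instance": "Portal Instance",
        "contained_deployments": ["Search App Deployment"],
    },
    "Search App Deployment": {
        "deployed_app_instance": "Search Instance",
        "contained_deployments": [],
    },
}

app_instances = {
    "Portal Instance": {"depends_on_technology": ["Tomcat Instance"]},
    "Search Instance": {"depends_on_technology": ["Solr Instance"]},
}

tech_instances = {
    "Tomcat Instance": {"instance_of_technology_provider": "Apache Tomcat 7"},
    "Solr Instance": {"instance_of_technology_provider": "Apache Solr 4"},
}

def technology_providers(deployment_name, seen=None):
    """Recursively collect the Instances of Technology Provider that an
    Application Deployment (and its contained deployments) depend on."""
    seen = set() if seen is None else seen
    if deployment_name in seen:          # guard against cyclic containment
        return set()
    seen.add(deployment_name)
    dep = deployments[deployment_name]
    providers = set()
    for tech in app_instances[dep["deployed_app_instance"]]["depends_on_technology"]:
        providers.add(tech_instances[tech]["instance_of_technology_provider"])
    for child in dep["contained_deployments"]:
        providers |= technology_providers(child, seen)
    return providers

print(sorted(technology_providers("Portal App Deployment")))
# -> ['Apache Solr 4', 'Apache Tomcat 7']
```

The same traversal logic would apply whether it ends up in a report (XSLT/XPath) or in a repository script.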
jonathan.carter
Posts: 1087
Joined: 04 Feb 2009, 15:44

schulmkg wrote:I'm going to split this up into three replies since otherwise it is getting too cumbersome.

On the Quick-And-Dirty approach, the problem with 'black-boxing' is that you are still committing to a particular mapping which may turn out not to be accurate, and undoing an existing mapping to a 'black box' is additional work. I'd rather have dangling references that result in no data showing up in a report. I understand you need a balance here, but in part the "black boxing" approach is what has gotten us into trouble here.

Furthermore, in essence you are talking about a Waterfall-style approach to design, which, while having its strengths, is currently not very much in favor with technical management. Instead a more incremental Scrum / 90-Day Rapid Action Plan approach is required (the latter is what we had).

So I'm not sure you even need to rewrite tutorials as much as provide some "Rapid Action Plan" guidance along the lines of "If your initial Goal is X - start here, and ignore these things. If it is Y, start there and ignore those things."
We are totally on-board with the idea of having dangling definitions, too. Absolutely right. It's just that the black-box approach enables us to start making coarse-grain relationships between things - really useful where things are provided by a 3rd party and we don't know the details of how they're put together.

We usually work in short 3, 6, 12 week iterations that partially populate the model and elaborate or extend scope as required. Again the meta model and tool help us to join the dangling things back up when we need to. I think we agree in terms of the types of ways we need to be able to work and we certainly work with the tool in the way you've described.

I like the sound of what you're suggesting about the tutorials. We had some internal documentation that morphed into some of the tutorials but we probably elaborated during that morphing. I think what you're suggesting might be more like a couple of paragraphs that describe modelling concepts rather than detailed how-to?
Essential Project Team
jonathan.carter
Posts: 1087
Joined: 04 Feb 2009, 15:44

schulmkg wrote:On the Visio/UML import question, I don't think you can really make the argument that UML is dramatically more rigorous. UML has a number of ambiguities within it, and every XMI I've worked with has required manual "fixups" when moving between tools of different vendors.

So you could provide a default mapping config file within which users could edit the mappings to conform to their model. My recommendation would be to start with a standard like SysML rather than a particular tool (like Sparx). The advantage of SysML is that you have folks like IBM as well as Microsoft participating in it. So while there will still need to be fixups for particular tooling implementations, it provides a solid starting place.
I certainly agree with you about XMI; however, it's the format we see most commonly. What we recommend is using Stereotypes that match our meta classes to describe what each class in the UML model means. Then we can process the XMI mechanically, and it means that you can do the semantic mapping within the source tooling. It's a step that can't be avoided.
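As a sketch of what that mechanical processing looks like once the stereotypes are in place - the XMI fragment, profile namespace, and meta class names below are made up for illustration, since real XMI varies by tool and version:

```python
import xml.etree.ElementTree as ET

# Toy XMI-like fragment: UML classes plus stereotype applications that
# reference them back via xmi:id. The "ess" profile namespace is invented.
XMI = """<xmi:XMI xmlns:xmi="http://www.omg.org/XMI"
                  xmlns:uml="http://www.omg.org/spec/UML"
                  xmlns:ess="http://essential.example/profile">
  <uml:Model xmi:id="m1">
    <packagedElement xmi:id="c1" xmi:type="uml:Class" name="CRM System"/>
    <packagedElement xmi:id="c2" xmi:type="uml:Class" name="Customer"/>
  </uml:Model>
  <ess:Application_Provider base_Class="c1"/>
  <ess:Data_Object base_Class="c2"/>
</xmi:XMI>"""

ESS = "{http://essential.example/profile}"
XMI_NS = "{http://www.omg.org/XMI}"

def classify(xmi_text):
    """Map each class name to the meta class named by its stereotype."""
    root = ET.fromstring(xmi_text)
    # xmi:id -> class name for every packagedElement in the model
    names = {e.get(XMI_NS + "id"): e.get("name")
             for e in root.iter("packagedElement")}
    # Stereotype applications sit at the top level, pointing via base_Class
    return {names[el.get("base_Class")]: el.tag[len(ESS):]
            for el in root if el.tag.startswith(ESS)}

print(classify(XMI))
# -> {'CRM System': 'Application_Provider', 'Customer': 'Data_Object'}
```

Once each source class carries a stereotype naming its meta class, the import becomes a lookup rather than guesswork.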

I'll take a look at SysML - perhaps it has some higher-level concepts (higher than Class and Relationship) that we can map some semantics to. The issue here, though, again is the variance in the use of the tools. For example, in an ERD, do we map the elements in the model to Data Objects, Data Representations, Information Views or Information Representations? It often depends on some a priori knowledge of the source model, and that's where the semantic mapping phase comes in for every import.

Jonathan
Essential Project Team
jonathan.carter
Posts: 1087
Joined: 04 Feb 2009, 15:44

schulmkg wrote:OK, on to my problem at hand. No, the issue is not that I have 75 different Application Providers.

The challenge is that we are not modeling applications per se, but rather a more abstract model of Application Offerings in a portfolio that we sell - i.e. not for internal IT management but for creating a more coherent OFFERING architecture. And because the hierarchy of the Portfolio was already set for us (inherited, with zero changeability) in ways that are not very compatible with EA perspectives, we opted not to hammer square pegs into round holes.

So instead we chose to define an application that appears in our Portfolio Center as having been 'deployed to the Portfolio center' and thus capture the data from the perspective of

Application Deployments.

This seemed the right way to go because we could
1) have contained application deployments - which met how our Portfolio is architected in places

2) map directly to "deployment Instance", which in turn had a "Depends on Technology" attribute, which is exactly what we were trying to model:

Applications deployed to the Portfolio Center that have implementation dependencies on particular technologies.


So what I now have is ALREADY BUILT

182 Application Deployments

Each linked to
a Deployed Application Instance
0 or more "Contained Application Deployments"
an Application Provider.

The App Provider simply has the Production/Sunset/Retired status tracked

the Deployed Application Instance has
Instance of Technology Provider tracked - so that we can track the "Role" of the offering (since our offerings are often in market competition with other vendors)

Is Dependent On Technology Instances tracked

Most of the Technology Instances are Infrastructure Software Instances, but we have some hardware dependencies (particular kinds of switches, CPUs, etc.). All of these in turn are linked to their own Instance of Technology Provider.

Here we also track whether the ITP is "required" or "optional" in the Instance Deployment Status (we extended the enumeration to contain these) - since some of our deployments can be deployed on divergent tech platforms.


So one "instance of Infrastructure Software Instance" can map to multiple "Deployment Instances", each of which is a different Application Deployment. At a minimum each Software instance maps to at least one Deployment, at most to about 75 of the 180.

So what I need to do is figure out a way to

EITHER
create 180 Technology Product Builds populated from the "Dependent on Technology Instance" attribute field

OR to figure out some way to build a report that walks this tree from the App Deployment level down to the underlying Instance of Technology Provider.

All makes sense. A 'typical' (in the nicest possible way!) current-state capture - these normally take a kind of bottom-up approach in terms of how we model things. Kind of, because it is often most natural to start in the logical layer and work top-down-bottom-up from there to gather what you need from the information that you have about how things are in the 'architecture'.

To zoom in on the crossroads that you're at, it would certainly be possible to script the creation of the Technology Product Builds and the associated Technology Product Usages in the related Technology Product Build Architecture.

However, I think the first question to consider is what you need to see on the View. If the OOTB Views are not really doing what you need, it may be best to take a look at putting together what you do need, based on what you've already modelled before doing the work to include the Technology Product Builds.

Without wanting to throw a spanner in the works... I read with interest when you mentioned that what you are modelling is actually your 'product service' offerings. This is exactly what the Product (and related) classes in the Business Layer are designed for...
However, I mention this as an aside / for your interest rather than trying to derail things! :)

Let us know if you have any queries about either scripting some of the modelling or the querying for the Views
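For illustration, the core of such a script is a simple transformation - something along these lines, with invented names and plain Python records standing in for real repository instances (actual instance creation would go through the Protege API rather than dicts):

```python
# Hypothetical input: one entry per Application Deployment, listing the
# technology instances it depends on (i.e. what was captured in the
# "Is Dependent On Technology Instances" slot).
dependencies = {
    "Portal App Deployment": ["Tomcat Instance", "Oracle DB Instance"],
    "Reporting App Deployment": ["Tomcat Instance"],
}

def make_builds(dependencies):
    """Derive one Technology Product Build record per deployment, whose
    usages are the deployment's technology instances. Field names are
    illustrative, not the actual meta model slot names."""
    builds = []
    for deployment, techs in sorted(dependencies.items()):
        builds.append({
            "name": f"{deployment} - Technology Build",
            "technology_product_usages": sorted(techs),
        })
    return builds

for build in make_builds(dependencies):
    print(build["name"], "->", ", ".join(build["technology_product_usages"]))
```

The point is that once the "Depends on Technology" data is already captured, generating the 180 builds is a mechanical loop over it.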

Jonathan
Essential Project Team
schulmkg
Posts: 35
Joined: 02 Aug 2011, 18:46

jonathan.carter wrote:
I like the sound of what you're suggesting about the tutorials. We had some internal documentation that morphed into some of the tutorials but we probably elaborated during that morphing. I think what you're suggesting might be more like a couple of paragraphs that describe modelling concepts rather than detailed how-to?
Yeah, that's pretty much it. EA is a big conceptual learning curve, and figuring out where in the model to start a particular type of effort either requires some guidance or a comprehensive understanding.

Note also that one of the bits of documentation that is not obvious (at least to me) is what components are required to make which reports "useful". So when Mgmt says "that's the report I want to see", short of digging into the XSLT/XPath code, right now it's not clear how to get there most optimally.
schulmkg
Posts: 35
Joined: 02 Aug 2011, 18:46

jonathan.carter wrote:
I'll take a look at SysML - perhaps it has some higher-level concepts (higher than Class and Relationship) that we can map some semantics to. The issue here, though, again is the variance in the use of the tools. For example, in an ERD, do we map the elements in the model to Data Objects, Data Representations, Information Views or Information Representations? It often depends on some a priori knowledge of the source model, and that's where the semantic mapping phase comes in for every import.

Jonathan
I understand the challenges, but don't let the perfect become the enemy of "good enough to help"... For example, if every Visio model you imported simply imported the objects and any relationships in the diagram, and flagged in RED (or through some other signifier) the relationships and objects you could not automatically translate, that would STILL save users a lot of time that is currently spent just entering all the elements.

And if you got 50%-75% of the imported things "right", we are now massively ahead of the game.

I think one of the things to recognize is that in many if not most organizations, these models are already out of date except at the physical implementation layer (where the IT Ops folks use them to actually fix things when they break).

So to some extent, relying on drawings that have linkages in them conveys a false sense of precision that is potentially worse.
schulmkg
Posts: 35
Joined: 02 Aug 2011, 18:46

jonathan.carter wrote:
All makes sense. A 'typical' (in the nicest possible way!) current-state capture - these normally take a kind of bottom-up approach in terms of how we model things. Kind of, because it is often most natural to start in the logical layer and work top-down-bottom-up from there to gather what you need from the information that you have about how things are in the 'architecture'.

To zoom in on the crossroads that you're at, it would certainly be possible to script the creation of the Technology Product Builds and the associated Technology Product Usages in the related Technology Product Build Architecture.

However, I think the first question to consider is what you need to see on the View. If the OOTB Views are not really doing what you need, it may be best to take a look at putting together what you do need, based on what you've already modelled before doing the work to include the Technology Product Builds.

Without wanting to throw a spanner in the works... I read with interest when you mentioned that what you are modelling is actually your 'product service' offerings. This is exactly what the Product (and related) classes in the Business Layer are designed for...
However, I mention this as an aside / for your interest rather than trying to derail things! :)

Let us know if you have any queries about either scripting some of the modelling or the querying for the Views

Jonathan

Thanks, Jonathan, for the encouragement that I didn't send us off on a complete goose chase.

I did look at the business modeling pieces, but the thing that put me off was looking at the reports that it generated - they didn't answer the questions that our use cases were pointing us at.

Fundamentally, we are trying not only to convince the CTO that this is useful; we also have to convince the independent Biz units that own the various solutions, Sales (which has a responsibility to sell across these BUs), and our global alliances teams.

So the questions we are trying to answer are:
  1. Which solutions/applications have direct dependencies on which technologies?
    1. Does it use multiple vendors for a technology (i.e. MSFT SQL and Oracle as DBs)?
    2. Are the technologies used "one-offs" from a small vendor vs. technologies from a strategic partner?
    3. Are the technologies "production" or "sunset" (Windows XP anyone??)?
  2. What solutions are affected by a technology change?
    1. Windows XP loses security update support in 6 months - what solutions have to be updated?
    2. What solutions could take advantage of our new relationship with an online CRM partner vs. on-prem installs?
  3. What is the enumerated list of technologies REQUIRED vs. "technology agnostic"?
    1. In bidding the offering, do I need to include costs associated with replatforming parts of the solution?
    2. Do I avoid making this offer to the customer since they have a "religious" allergy to part of the required tech stack?
We also have an independent project underway that is looking at our global skills distribution, and a third one that is looking at what are the delivery requirements for a particular solution in a particular global region.

We want to be able to merge the tech requirements for a solution with the regional skills distributions and then map them against the demand plan and delivery requirements on a region by region basis so that we can optimize training requirements.

This all pivots on low-level technology questions. The business aspects necessarily have to be tracked elsewhere because of the politics involved.

Meanwhile I'm still working on assessing whether I build custom scripts or move the "Depends on Technology" relationship to the Product Build Role.

Thanks for the help, though.
jonathan.carter
Posts: 1087
Joined: 04 Feb 2009, 15:44

schulmkg wrote:
jonathan.carter wrote:
I'll take a look at SysML - perhaps it has some higher-level concepts (higher than Class and Relationship) that we can map some semantics to. The issue here, though, again is the variance in the use of the tools. For example, in an ERD, do we map the elements in the model to Data Objects, Data Representations, Information Views or Information Representations? It often depends on some a priori knowledge of the source model, and that's where the semantic mapping phase comes in for every import.

Jonathan
I understand the challenges, but don't let the perfect become the enemy of "good enough to help"... For example, if every Visio model you imported simply imported the objects and any relationships in the diagram, and flagged in RED (or through some other signifier) the relationships and objects you could not automatically translate, that would STILL save users a lot of time that is currently spent just entering all the elements.

And if you got 50%-75% of the imported things "right", we are now massively ahead of the game.

I think one of the things to recognize is that in many if not most organizations, these models are already out of date except at the physical implementation layer (where the IT Ops folks use them to actually fix things when they break).

So to some extent, relying on drawings that have linkages in them conveys a false sense of precision that is potentially worse.
Agree with the good-enough approach.
If you can get Visio to export XML with enough detail that we can identify what each diagram element is supposed to be, then this is already possible.
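For example, with a 2003-style VDX export - the namespace and element layout in this fragment are assumptions about that format, and the shapes are invented - pulling out the shapes and their connections is straightforward:

```python
import xml.etree.ElementTree as ET

# A tiny hand-written fragment in the shape of a Visio 2003 VDX export.
VDX = """<VisioDocument xmlns="http://schemas.microsoft.com/visio/2003/core">
  <Pages><Page>
    <Shapes>
      <Shape ID="1" NameU="Web Server"/>
      <Shape ID="2" NameU="Database"/>
      <Shape ID="3" NameU="Connector"/>
    </Shapes>
    <Connects>
      <Connect FromSheet="3" ToSheet="1"/>
      <Connect FromSheet="3" ToSheet="2"/>
    </Connects>
  </Page></Pages>
</VisioDocument>"""

NS = {"v": "http://schemas.microsoft.com/visio/2003/core"}

def extract(xml_text):
    """Collect shape names and connector endpoints. Anything that cannot
    be mapped automatically would then be flagged for manual review."""
    root = ET.fromstring(xml_text)
    shapes = {s.get("ID"): s.get("NameU")
              for s in root.iterfind(".//v:Shape", NS)}
    links = [(c.get("FromSheet"), c.get("ToSheet"))
             for c in root.iterfind(".//v:Connect", NS)]
    return shapes, links

shapes, links = extract(VDX)
print(shapes)   # shape ID -> shape name
print(links)    # (connector sheet ID, attached sheet ID) pairs
```

From there it's a mapping step from shape names (or master/stencil names) to meta classes, with the unmapped remainder flagged for a human.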

Jonathan
Essential Project Team
jonathan.carter
Posts: 1087
Joined: 04 Feb 2009, 15:44

schulmkg wrote:
schulmkg wrote:Thanks, Jonathan, for the encouragement that I didn't send us off on a complete goose chase.

I did look at the business modeling pieces, but the thing that put me off was looking at the reports that it generated - they didn't answer the questions that our use cases were pointing us at.

Fundamentally, we are trying not only to convince the CTO that this is useful; we also have to convince the independent Biz units that own the various solutions, Sales (which has a responsibility to sell across these BUs), and our global alliances teams.

So the questions we are trying to answer are:
  1. Which solutions/applications have direct dependencies on which technologies?
    1. Does it use multiple vendors for a technology (i.e. MSFT SQL and Oracle as DBs)?
    2. Are the technologies used "one-offs" from a small vendor vs. technologies from a strategic partner?
    3. Are the technologies "production" or "sunset" (Windows XP anyone??)?
  2. What solutions are affected by a technology change?
    1. Windows XP loses security update support in 6 months - what solutions have to be updated?
    2. What solutions could take advantage of our new relationship with an online CRM partner vs. on-prem installs?
  3. What is the enumerated list of technologies REQUIRED vs. "technology agnostic"?
    1. In bidding the offering, do I need to include costs associated with replatforming parts of the solution?
    2. Do I avoid making this offer to the customer since they have a "religious" allergy to part of the required tech stack?
Makes sense. We are expanding the set of out-of-the-box Views to give better coverage (and we're working on some big updates at the moment) - I appreciate that the Business area, especially around Products etc., is not well covered by the Views.

The questions you've got there should be well covered by the approach that you're taking, though - without wanting to sound glib, we're looking at how technology supports applications. We've got some new features in the next release of the meta model that cover the dependencies Technology Products have on other technologies, e.g. which operating systems are supported / not supported, which databases we can and can't use with a given product, and the like.

I like the idea of modelling what particular customers will not accept and being able to produce Views that show which are the suitable solutions. Perhaps that's something that the new Technology Product Architecture stuff I just mentioned would tackle nicely.

As I think we've discussed, these questions should all be addressable by modelling the relevant Application Logical and Physical and the Technology Logical elements. Although, if we want to know the impact of the Windows XP issue, we might need to know how many physical servers (and virtual ones) are affected, which does take us into Technology Physical elements.
schulmkg wrote: We also have an independent project underway that is looking at our global skills distribution, and a third one that is looking at what are the delivery requirements for a particular solution in a particular global region.

We want to be able to merge the tech requirements for a solution with the regional skills distributions and then map them against the demand plan and delivery requirements on a region by region basis so that we can optimize training requirements.

This all pivots on low-level technology questions. The business aspects necessarily have to be tracked elsewhere because of the politics involved.

Meanwhile I'm still working on assessing whether I build custom scripts or move the "Depends on Technology" relationship to the Product Build Role.

Thanks for the help, though.
I'm sure you've already noticed that we have an optional meta model extension to handle the modelling of Skills. This has been developed by our Essential Community Process (ECP) as ECP-5. This adds some constructs to the Business Layer to model which roles require which Skills and which Actors have those Skills. If you haven't already, you might like to take a look at that and see whether it might be useful or provide a basis for handling your needs. There's a small demo repository to help explain it.

Keep us posted as to how you get on, in particular about those migration scripts.

Jonathan
Essential Project Team