Technology Architecture

Models for Phase D

Philippe Desfray, Gilbert Raymond, in Modeling Enterprise Architecture with TOGAF, 2014

Abstract

Technology architecture associates application components from application architecture with technology components representing software and hardware components. Its components are generally acquired in the marketplace and can be assembled and configured to constitute the enterprise's technological infrastructure. Technology architecture provides a more concrete view of the way in which application components will be realized and deployed. It enables the migration problems that can arise between the different steps of the IS evolution path to be studied earlier. It provides a more precise means of evaluating responses to constraints (nonfunctional requirements) concerning the IS, notably by estimating hardware and network sizing needs or by setting up server or storage redundancy. Technology architecture concentrates on logistical and location problems related to hardware location, IS management capabilities, and the sites where the different parts of the IS are used. Technology architecture also ensures the delivered application components work together, confirming that the required business integration is supported.


URL:

https://www.sciencedirect.com/science/article/pii/B9780124199842000100

The EAP Profile

Philippe Desfray, Gilbert Raymond, in Modeling Enterprise Architecture with TOGAF, 2014

15.7 Technology architecture (Figure 15.10)

Technology architecture deals with the deployment of application components on technology components. A standard set of predefined technology components is provided in order to represent servers, networks, workstations, and so on (Figure 15.11).

Figure 15.10. EAP profile for technology architecture.

Figure 15.11. EAP profile with a focus on the "Hardware Technology Component" metaclass.

TOGAF Element | UML Mapping | Definition
Hardware Technology Component | Node (abstract element) | Hardware element on which application components can be deployed
Technology Architecture Domain | Package | Root of the technology model; package enabling a technology model to be structured
Server | Node | Hardware platform that can be connected to other peripherals and on which application components will be deployed
Work Station | Node | Workstations are linked by network connections to an information system; application components can also be deployed on workstations
Internet | Node | Internet access node or point
Router | Node | Network router
Switch | Node | Network switch
Network Node | Node | Network node
Connexion | Dependency | Network connection between peripherals or network nodes
Technology Artifact | Artifact | Product resulting from enterprise architecture or IS development work; this can be a file, a technical library, and so on
Application Component Instance | Instance | Occurrences are used to deploy application components; represented under the deployment context (for example, a server)
Bus | Node | Communication bus
NetworkLink | Dependency | Network connection
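The metaclasses in the table above can be illustrated with a small sketch. The following is a minimal, hypothetical Python model (the class and instance names are illustrative, not from the EAP profile itself) showing how hardware technology components host application component instances and how connexions link nodes:

```python
from dataclasses import dataclass, field

@dataclass
class HardwareTechnologyComponent:
    """Hardware element on which application components can be deployed."""
    name: str
    deployed: list = field(default_factory=list)  # names of deployed instances

    def deploy(self, instance_name: str) -> None:
        """Record an application component instance under this deployment context."""
        self.deployed.append(instance_name)

@dataclass
class Connexion:
    """Network connection between peripherals or network nodes."""
    source: HardwareTechnologyComponent
    target: HardwareTechnologyComponent

# Build a tiny technology architecture: a server, a workstation, and a link.
server = HardwareTechnologyComponent("AppServer01")
workstation = HardwareTechnologyComponent("WS-Accounting")
link = Connexion(workstation, server)

server.deploy("BillingComponent_instance")
print(server.deployed)  # ['BillingComponent_instance']
```

In a real EAP model these elements would of course be UML nodes and dependencies in a modeling tool, not Python objects; the sketch only mirrors the mapping in the table.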


URL:

https://www.sciencedirect.com/science/article/pii/B978012419984200015X

Enterprise-Level Data Architecture Practices

Charles D. Tupper, in Data Architecture, 2011

Enterprise Technology Architectures

Enterprise-level technology architectures ensure that the enterprise is developing the right applications on the right platforms to maintain the competitive edge it is striving for. Because the architecture keeps a structure in place that provides a default choice mechanism for each application, precious time is not wasted in opportunity assessment. The technology architecture also provides a road map within each technology platform to ensure that the right tools and development options are utilized, preventing additional time from being spent extricating the application effort from previously experienced pitfalls.

But architectures aren't enough to ensure that the process and templates are used properly. Without the infrastructure mechanisms in place, the architectures, processes, standards, procedures, best practices, and guidelines fall by the wayside. We will cover in detail in the next chapter which groups are necessary and what roles they perform. With these data infrastructure mechanisms in place, the architectures have a chance of surviving the onslaught of chaos brought about by changing priorities, strategic advantage, and just plain emergencies. We will cover the system and technology architectures in more detail in subsequent chapters, where they are more appropriately addressed.


URL:

https://www.sciencedirect.com/science/article/pii/B9780123851260000036

Measurements and Sustainability

Eric Rondeau, ... Gérard Morel, in Green Information Technology, 2015

Traceability Matrix

The design process of green ICT architectures must guarantee the sustainability of proposed solutions, so the verification step is crucial for checking the design. Table 3.3 shows that all functions were tested, confirming that the system requirements satisfy the stakeholder requirements. Studying the traceability matrix avoids delivering an ill-conceived project by prompting its reconsideration in time. A redesign step would then change the initial choices: selecting new ICT products to substitute for ill-adapted ones, developing software patches, dismantling certain solutions, and so on. All these modifications generate premature and useless waste, consume additional energy, and produce fragile, less sustainable solutions.

Table 3.3. Traceability Matrix

Stakeholder Requirements | System Requirements | Functions | Verification Method | Test Case | Test Result
GICT0.1 Ecology_Pillar_ICT_Performance | GICT0.1.3.0 IT_Energy_Consumption | Energy consumption estimation of ICT architecture | Documentation | TC1 | OK
 | | Energy consumption of ICT architecture life cycle | Documentation | TC2 | OK
 | | Real-time measure of ICT energy consumption | Demonstration | TC3 | OK
 | GICT0.1.3.1 Ratio_CO2_Kwh | Carbon emission estimation of ICT architecture | Documentation | TC1 | OK
 | | Carbon emission of ICT architecture life cycle | Documentation | TC2 | OK
 | | Real-time measure of ICT carbon emission | Demonstration | TC3 | OK
GICT0.3 Economic_Pillar_ICT_Performance | GICT0.3.0 Ratio_Euro_Kwh | Energy cost estimation of ICT architecture | Documentation | TC1 | OK
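The verification idea behind the traceability matrix can be sketched in a few lines. The following hypothetical Python fragment (the row tuples paraphrase Table 3.3; the helper name is illustrative) checks that every function has a passing verification result before the design is accepted:

```python
# Each row links a system requirement to a function, its verification
# method, its test case, and the recorded result.
rows = [
    ("GICT0.1.3.0 IT_Energy_Consumption",
     "Energy consumption estimation of ICT architecture", "Documentation", "TC1", "OK"),
    ("GICT0.1.3.0 IT_Energy_Consumption",
     "Real-time measure of ICT energy consumption", "Demonstration", "TC3", "OK"),
    ("GICT0.1.3.1 Ratio_CO2_Kwh",
     "Carbon emission estimation of ICT architecture", "Documentation", "TC1", "OK"),
]

def untested_or_failing(matrix):
    """Return the functions whose verification did not pass (or is missing)."""
    return [func for _, func, _, _, result in matrix if result != "OK"]

print(untested_or_failing(rows))  # [] -> all functions verified, design accepted
```

An empty result corresponds to the situation in Table 3.3: all functions tested, so the system requirements can be said to satisfy the stakeholder requirements.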

In conclusion, systems engineering provides good practices for the ecodesign of complex systems, especially to green the design of ICT projects.


URL:

https://www.sciencedirect.com/science/article/pii/B9780128013793000036

Information Security Consulting

James Kelton, in Security Consulting (Fourth Edition), 2013

Five Steps to Intruding

Because every organization has a unique technology architecture, the actual process used by the opposing team to attack information systems will vary. I have found that most attacks use one or more of the following steps to gain access to systems:

1.

Probe. Intruders often use scripts and tools to search for opportunities and vulnerabilities. One easy approach used by hackers is to wait for a vulnerability to be announced by a hardware or software vendor. For example, Microsoft may issue a security bulletin stating that a patch is needed for a Windows Server 2008 system that is vulnerable to information disclosure. By searching for systems that have not applied the appropriate hardware or software patch, hackers can identify potential targets. Some vendors may contribute to the problem by providing advance notification of the vulnerability even when the patch is not yet available.

2.

Exploit. Once a vulnerable system has been identified, the intruder will exploit the opportunity to gain access to the system or data.

3.

Enhance. In many cases, the intruder first obtains access to a system with lower-level access privileges. It then becomes a game for the intruder to find ways to get higher and higher privileges until the intruder has full system administrator rights.

4.

Compromise. Full system administrator rights are not an end in themselves; they are the means to compromise the system. With administrator rights, the intruder can change the system configuration or read, delete, or modify software applications and data.

5.

Cover tracks. Once intruders have compromised a system, they frequently cover their tracks to erase evidence. In this process, the intruder may erase log files, remove IDs, stop backup systems, and more.

What many teams don't know is that the opposing team may install a backdoor or other tool that allows access to the compromised system. The backdoor may allow the intruder access to the information system even if the team changes the passwords on all of its accounts.

Time is on the side of the opposing team. Intruders know that teams have time and budget constraints and can't always implement all of the security controls needed. In some cases, software updates, patches, and bug fixes are issued on a weekly basis. Imagine how difficult it is for an information technology (IT) department to ensure operating system and application software is routinely updated and tested on all servers and workstations.

Opposing teams know that most systems are only secured with an ID and password. With a simple and easy-to-guess user ID, all an intruder needs is a password. Unfortunately, teams make it easy for intruders, and this is why opposing teams love users:

Users like simple, easy-to-remember passwords that can be found in a dictionary, and these passwords provide access to all of their data. Using a brute-force dictionary approach, many such passwords can be cracked within a few minutes. And because users tend to reuse passwords, once one password is compromised, an intruder gains access to multiple systems.

Users write down hard-to-remember passwords. These passwords are frequently stored near their computer system. Intruders know that by gaining physical access to the workstation, they may find passwords on Post-it Notes, under the keyboard, or somewhere else nearby.

Users do not like to change passwords. Once a password is obtained, the intruder oftentimes has weeks or months before it expires.

As a coach, I have found that only one out of every 100 organizations has hard-to-crack passwords. Stronger user education and security awareness training are needed.


URL:

https://www.sciencedirect.com/science/article/pii/B9780123985002000138

Introduction

Thomas Sterling, ... Maciej Brodowicz, in High Performance Computing, 2018

1.5.8 Neodigital Age and Beyond Moore's Law

The international HPC development community will extend many-core heterogeneous system technologies, architectures, system software, and programming methods from the petaflops generation to exascale in the early part of the next decade. But the semiconductor fabrication trends that have driven the exponential growth of device density and peak performance are coming to an end as feature size approaches the nanoscale (approximately 5 nm). This is often referred to as the "end of Moore's law". This does not mean that system performance will also stop growing, but that the means of achieving it will rely on other innovations through alternative device technologies, architectures, and even paradigms. The exact forms these advances will take are unknown at this time, but exploratory research suggests several promising directions—some based on new ways of using refined semiconductor devices, others on complete paradigm shifts based on alternative methods. Other forms will be incremental changes to current practices benefiting from a legacy of experience and application.

While not commonly employed, the term "neodigital age" designates and describes new families of architectures that, while still building on semiconductor device technologies, go beyond the von Neumann derivative architectures that have dominated HPC throughout the last 6 decades and adopt alternative architectures to make better use of existing technologies. The von Neumann architecture emphasizes the importance of arithmetic floating-point units (FPUs) as precious resources which the remainder of the chip logic and storage is designed to support. It also enforces sequential instruction issue for execution control. Complexity of design offers many workarounds, but the fundamental principles prevail. Now FPUs are among the lowest-cost items and parallel control state is essential for scalability. New advances to current architecture and possible alternatives to von Neumann architectures may be among the innovations to extend the performance of semiconductor technologies beyond exascale.

More radical concepts are being pursued, at least for certain classes of computation. Special-purpose architectures where the logic design and dataflow communications match the algorithms can significantly accelerate computations for specific problems. Digital signal processing special-purpose chips have been employed since at least the 1970s. More recently architectures such as the Anton expand the domain of special-purpose devices to simulation of N-body problems, principally for molecular dynamics. Even more revolutionary approaches to computing are targets of research, including such techniques as quantum computing and neuromorphic architectures. Quantum computing exploits the physics of quantum mechanics to use the same circuits to perform many actions at the same time. Potentially some problems could be solved in seconds that would take conventional computers years to perform. Neuromorphic architecture is inspired by brain structures for such processes as pattern matching, searching, and machine learning. It is uncertain when such innovative concepts will achieve useful commercialization, but the future of computing systems and architecture is promising and exhibiting exciting potential.


URL:

https://www.sciencedirect.com/science/article/pii/B9780124201583000010

Big Picture

Milan Guenther, in Intersection, 2013

The human side of architecture

Historically, the discipline of Enterprise Architecture concentrated on shaping complex IT and technology architectures in alignment with business requirements. Today's practitioners expand their scope to address all structures that constitute an enterprise and make it work. Organizational reporting lines, information systems delivering data, processes driven by automated systems and human decisions—all these structures are just different aspects of the same system. The resulting architecture is the foundation of every single step the enterprise takes, from sending a receipt to a customer to a merger with another organization. Just as buildings are structures made for people to live in and look at, enterprises are structures made by people for people as a space for interactions and transactions. In consequence, people themselves cannot be seen as assets to be incorporated in an architectural description, only the enterprise's relationship to them. Any person in touch with an enterprise is both a user of its architecture and a contributor to it.


URL:

https://www.sciencedirect.com/science/article/pii/B9780123884350500044

The Enterprise Data Warehouse

Charles D. Tupper, in Data Architecture, 2011

Your Choices

Based on these review points, if you need to:

Standardize your infrastructure technology architecture

Standardize your application architecture

Develop a technology road map

Control project technology choices

Show results within 12 months from an EA program

Control scope and resource commitments carefully

Avoid formal, abstract methodologies

you should choose bottom-up.

Alternatively, based on these review points, if you need to:

Focus on information and data in the enterprise

Establish a broad scope at the beginning of the EA program

Satisfy management's project funding requirements

Evaluate your business architecture

Analyze the relationships between business processes, applications, and technology

you should choose top-down.


URL:

https://www.sciencedirect.com/science/article/pii/B9780123851260000206

Project Management

Rick Sherman, in Business Intelligence Guidebook, 2015

Infrastructure Swim Lane

The infrastructure functional group is responsible for all the activities involving the care and feeding of the technologies used for BI, as shown in Figure 18.23. Typically, the team is composed of IT staff personnel from an enterprise's systems group. This group manages the enterprise's infrastructure such as networks, hardware (servers, PCs, tablets), storage, and databases. Nowadays, some or all of these may actually be hosted in off-premise cloud environments, in which case this group is the liaison with the outside vendor that provides the hosting solution.

FIGURE 18.23. Infrastructure functional group.

The two primary deliverables from this grouping are:

Architectures: design and implementation of the technology and product architectures

Operations: operating, monitoring, and tuning the BI environment

This group's primary interaction with the BI team is with the BI architect. It is that BI architect who will design the initial technology and product architectures. In addition, the BI architect likely handles any product evaluation efforts. The infrastructure group works with the BI architect to develop the detailed technology and product architectures that will be implemented.

After the architectures have been designed and the product selected, the infrastructure group's responsibilities include setting up the technology environments, including such deliverables as:

Acquiring appropriate products with licenses

Installation and configuration of the products

Enabling product access and usability for the BI project team and appropriate business users

Ensuring appropriate privacy, security, and regulatory compliance

The infrastructure group is responsible for ongoing operations, including monitoring and performance tuning. This group includes the BI environment in its backups, auditing, and disaster recovery processes.


URL:

https://www.sciencedirect.com/science/article/pii/B9780124114616000186

The ADM Method

Philippe Desfray, Gilbert Raymond, in Modeling Enterprise Architecture with TOGAF, 2014

Phase D (technology architecture)

Unsurprisingly, the role of phase D is to establish the technological and physical correspondence of the elements developed during the previous phases. In particular, technology architecture defines the platforms and execution environments on which the applications run and the data sources are hosted for use.

So what are the links between application architecture and technology architecture? A first approach consists in considering them as two separate elements, so as to avoid any technical "intrusion" into the work of the application architect. The opposite approach would lead us to consider application architecture as a simple reformulation of the technical reality.

A position that is too dogmatic will lead to a dead end: What is the point of developing a "virtual" application architecture with no link to the reality of the deployed applications? Common sense (and purse strings) calls for more realism. Even though it must remain logical, application architecture (including its service-oriented architecture (SOA) formulation) is not completely separate from its physical translation. The most important thing here is the identification of the role of each application or component, independent of its technical implementation: the fundamental structure is similar and the viewpoint is different, just like a logical service interface, which is not fundamentally modified by its implementation in Java or via a web service.
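The point about a logical service interface surviving its technical realization can be sketched concretely. The following hypothetical Python fragment (the service name, methods, and data are invented for illustration; the book itself mentions Java and web services) shows one interface with two interchangeable realizations, leaving the application architecture untouched by the deployment choice:

```python
from abc import ABC, abstractmethod

class InvoiceService(ABC):
    """Logical service interface, as identified in application architecture."""
    @abstractmethod
    def total_due(self, customer_id: str) -> float: ...

class LocalInvoiceService(InvoiceService):
    """In-process realization (e.g., a library call)."""
    def __init__(self, ledger: dict):
        self.ledger = ledger  # customer_id -> list of open invoice amounts
    def total_due(self, customer_id: str) -> float:
        return sum(self.ledger.get(customer_id, []))

class WebInvoiceService(InvoiceService):
    """Remote realization: would issue a web-service call; stubbed here."""
    def total_due(self, customer_id: str) -> float:
        raise NotImplementedError("would issue an HTTP request")

# Application code depends only on the logical interface.
service: InvoiceService = LocalInvoiceService({"C42": [100.0, 50.0]})
print(service.total_due("C42"))  # 150.0
```

Swapping `LocalInvoiceService` for `WebInvoiceService` is a phase D (technology) decision; the interface, and hence the application architecture, is not fundamentally modified.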

Bearing in mind these two perspectives, a question comes to mind: Should we start by describing the technical architecture or the application architecture? This point is linked to the iterations of the ADM cycle, which will be more generally dealt with in Section 2.3. Remember that the ADM cycle is a generic framework, which does not forbid intrusions into earlier or later phases (the TOGAF document is strewn with suggestions of this type). In practice, no preestablished choices exist: this is the famous choice between "top down" and "bottom up," which always finishes with a compromise. The deployment of external tools imposes a type of architecture that can sometimes have a significant impact on application architecture solutions. In other contexts, architecture will be more oriented by architectural principles, for example to obtain a more progressive structure.

However, let's get back to the result of phase D: the technological architecture, in other words, a coherent set of software components, infrastructures, and technical platforms. These elements can come from external providers or be produced directly by teams within the enterprise. Moreover, the choice between deploying tools that are available in the marketplace or tools resulting from specific developments is a recurrent theme for an enterprise architect. Here too, the repository (see Section 4.1) will assist in this type of choice by making available a set of common norms, patterns, tools, and practices, which will help harmonize solutions within the enterprise.


URL:

https://www.sciencedirect.com/science/article/pii/B9780124199842000021