Revert to Source

In many organizations, once the work has been done to integrate a new system into the mainframe, say, it becomes much easier to interact with that system via the mainframe than to repeat the integration each time. For many legacy systems with a monolithic architecture this made sense: integrating the same system into the same monolith multiple times would have been wasteful and likely confusing. Over time other systems begin to reach into the legacy system to fetch this data, with the originating integrated system often “forgotten”.

Usually this leads to a legacy system becoming the single point of integration for multiple systems, and hence also becoming a key upstream data source for any business processes needing that data. Repeat this approach a few times, add in the tight coupling to legacy data representations we often see (for example in Invasive Critical Aggregator), and this can create a significant challenge for legacy displacement.

By tracing sources of data and integration points back “beyond” the legacy estate we can often “revert to source” for our legacy displacement efforts. This can allow us to reduce dependencies on legacy early on, as well as providing an opportunity to improve the quality and timeliness of data, since we can bring more modern integration techniques into play.

It is also worth noting that it is increasingly vital to understand the true sources of data for business and legal reasons such as GDPR. For many organizations with an extensive legacy estate it is only when a failure or issue arises that the true source of data becomes clear.

How It Works

As part of any legacy displacement effort we need to trace the originating sources and sinks for key data flows. Depending on how we choose to slice up the overall problem we may not need to do this for all systems and data at once, although understanding the main flows is very useful for getting a sense of the overall scale of the work to be done.

Our aim is to produce some type of data flow map. The actual format used is less important; the key is that this discovery doesn’t just stop at the legacy systems but digs deeper to see the underlying integration points. We see many architecture diagrams while working with our clients and it is surprising how often they seem to ignore what lies behind the legacy.

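As an illustration, such a map can be as simple as a machine-readable list of hops that deliberately continues past the legacy systems. The sketch below is a minimal Python version with hypothetical system names; it walks upstream edges to find the true sources of a data set:

```python
# A minimal data flow map: (from_system, to_system, data_set) edges.
# System names are hypothetical examples, not a prescribed format.
FLOWS = [
    ("warehouse_inventory", "mainframe", "stock_levels"),
    ("store_tills",         "mainframe", "stock_levels"),
    ("mainframe",           "website",   "stock_levels"),
    ("mainframe",           "reporting", "stock_levels"),
]

LEGACY = {"mainframe"}  # systems we intend to displace


def true_sources(data_set: str) -> set[str]:
    """Walk upstream past legacy to find originating systems
    (assumes the flow graph is acyclic, for brevity)."""
    to_visit = {src for src, _, ds in FLOWS if ds == data_set}
    sources = set()
    while to_visit:
        system = to_visit.pop()
        feeders = {src for src, dst, ds in FLOWS
                   if dst == system and ds == data_set}
        if feeders:
            to_visit |= feeders          # keep walking upstream
        elif system not in LEGACY:
            sources.add(system)          # a genuine originating system
    return sources


print(true_sources("stock_levels"))
# e.g. {'store_tills', 'warehouse_inventory'}
```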
There are several techniques for tracing data through systems. Broadly we can see these as tracing the path upstream or downstream. While there is often data flowing both to and from the underlying source systems, we find organizations tend to think only in terms of data sources. Perhaps, when viewed through the lens of the legacy systems, this is the most visible part of any integration? It is not uncommon to find that the flow of data from legacy back into source systems is the most poorly understood and least documented part of any integration.

For upstream we often start with the business processes and then attempt to trace the flow of data into, and then back through, legacy. This can be challenging, especially in older systems, with many different combinations of integration technologies. One useful technique is to use CRC cards with the goal of creating a dataflow diagram alongside sequence diagrams for key business process steps. Whichever technique we use, it is vital to get the right people involved, ideally those who originally worked on the legacy systems but more commonly those who now support them. If these people aren’t available and the knowledge of how things work has been lost, then starting at source and working downstream might be more suitable.

Tracing integration downstream can also be extremely useful and in our experience is often neglected, partly because if Feature Parity is in play the focus tends to be only on existing business processes. When tracing downstream we begin with an underlying integration point and then try to trace through to the key business capabilities and processes it supports, not unlike a geologist introducing dye at a possible source for a river and then seeing which streams and tributaries the dye eventually appears in downstream. This approach is especially useful where knowledge about the legacy integration and corresponding systems is in short supply, and particularly when we are creating a new component or business process. When tracing downstream we might discover where this data comes into play without first knowing the exact path it takes; here you will likely want to compare it against the original source data to verify whether things have been altered along the way.

Once we understand the flow of data we can then see if it is possible to intercept or create a copy of the data at source, which can then flow to our new solution. Thus instead of integrating to legacy we create some new integration to allow our new components to Revert to Source. We do need to make sure we account for both upstream and downstream flows, but these don’t have to be implemented together, as we see in the example below.

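To keep that switch cheap, the new integration can sit behind a small seam so a consumer can be repointed from the legacy copy to the true source without touching business logic. This is only a sketch of that idea, in Python, with invented names; the source-system API shown is an assumption:

```python
from typing import Protocol


class StockProvider(Protocol):
    """Port: where the new component gets stock levels from."""
    def stock_level(self, sku: str) -> int: ...


class LegacyMainframeProvider:
    """Adapter over the legacy-derived copy (e.g. last night's batch)."""
    def __init__(self, batch_snapshot: dict[str, int]):
        self._snapshot = batch_snapshot

    def stock_level(self, sku: str) -> int:
        return self._snapshot.get(sku, 0)


class SourceSystemProvider:
    """Adapter talking directly to the originating system."""
    def __init__(self, client):
        self._client = client  # hypothetical client for the source API

    def stock_level(self, sku: str) -> int:
        return self._client.current_stock(sku)


def sellable(provider: StockProvider, sku: str) -> bool:
    # Business logic depends only on the port, so Revert to Source
    # becomes a wiring change rather than a rewrite.
    return provider.stock_level(sku) > 0
```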
If a new integration isn’t possible we can use Event Interception or similar to create a copy of the data flow and route that to our new component. We want to do that as far upstream as possible to reduce any dependency on existing legacy behaviors.

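Event Interception here can be as simple as a tee in the inbound message path: legacy still receives every event unchanged, while an identical copy is published to the new component. A minimal sketch, with hypothetical handler names:

```python
from typing import Callable

Event = dict
Handler = Callable[[Event], None]


def intercept(legacy_handler: Handler, copy_handler: Handler) -> Handler:
    """Wrap the existing inbound path: legacy sees every event as
    before, and the new component receives an identical copy."""
    def handler(event: Event) -> None:
        legacy_handler(event)   # existing behavior, untouched
        copy_handler(event)     # routed to the new component
    return handler


# Wiring (names are illustrative): place the interception point as far
# upstream as possible, before legacy applies any transformation.
# inbound = intercept(mainframe.ingest, new_component.ingest)
```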

When to Use It

Revert to Source is most useful where we are extracting a specific business capability or process that relies on data that is ultimately sourced from an integration point “hiding behind” a legacy system. It works best where the data broadly passes through legacy unchanged, where there is little processing or enrichment happening before consumption. While this may sound unlikely, in practice we find many cases where legacy is just acting as an integration hub. The main changes we see happening to data in these situations are loss of data and a reduction in timeliness. Loss of data, since fields and elements are usually filtered out simply because there was no way to represent them in the legacy system, or because it was too costly and risky to make the changes needed. Reduction in timeliness, since many legacy systems use batch jobs for data import and, as discussed in Critical Aggregator, the “safe data update period” is often pre-defined and near impossible to change.

We can combine Revert to Source with Parallel Running and Reconciliation in order to validate that there isn’t some additional change happening to the data within legacy. This is a sound approach to use in general, but it is especially useful where data flows via different paths to different end points, but must ultimately produce the same results.

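On the reconciliation side this can be a periodic job that reads the same records via both paths and reports divergence; anything legacy drops or transforms in transit shows up as a mismatch. A minimal sketch, reusing the hypothetical providers from the earlier example:

```python
def reconcile(skus, legacy_provider, source_provider):
    """Compare the legacy-derived view with the direct-from-source view.

    A persistent mismatch suggests legacy is altering or losing data
    in transit, so a plain Revert to Source may not yet be safe.
    """
    mismatches = {}
    for sku in skus:
        via_legacy = legacy_provider.stock_level(sku)
        via_source = source_provider.stock_level(sku)
        if via_legacy != via_source:
            mismatches[sku] = (via_legacy, via_source)
    return mismatches


# Run this alongside both flows (parallel running) and investigate
# differences before cutting over to the source-based integration.
```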
There can also be a powerful business case to be made for using Revert to Source, as richer and more timely data is often available. It is common for source systems to have been upgraded or changed several times with these changes effectively remaining hidden behind legacy. We’ve seen multiple examples where improvements to the data were actually the core justification for these upgrades, but the benefits were never fully realized since the more frequent and richer updates could not be made available through the legacy path.

We can also use this pattern where there is a two-way flow of data with an underlying integration point, although here more care is needed. Any updates ultimately heading to the source system must first flow through the legacy systems, where they may trigger or update other processes. Luckily it is quite possible to split the upstream and downstream flows. So, for example, changes flowing back to a source system could continue to flow via legacy, while updates are taken directly from the source.

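The split can be made explicit in the wiring: reads come straight from the source system while writes still travel the legacy path, so any side effects legacy triggers on updates keep working. A sketch with hypothetical clients:

```python
class SplitFlowStockGateway:
    """Downstream (reads) reverted to source; upstream (writes) via legacy."""

    def __init__(self, source_client, legacy_client):
        self._source = source_client  # direct API to the source system
        self._legacy = legacy_client  # existing legacy integration

    def stock_level(self, sku: str) -> int:
        # Read directly from source: fresher, richer data.
        return self._source.current_stock(sku)

    def adjust_stock(self, sku: str, delta: int) -> None:
        # Write via legacy so dependent processes there still fire.
        self._legacy.submit_stock_adjustment(sku, delta)
```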
It is important to be mindful of any cross-functional requirements and constraints that might exist in the source system; we don’t want to overload that system, or find out it is not reliable or available enough to directly provide the required data.

Retail Store Example

For one retail client we were able to use Revert to Source to both extract a new component and improve existing business capabilities. The client had an extensive estate of shops and a more recently created web site for online shopping. Initially the new website sourced all of its stock information from the legacy system; in turn this data came from a warehouse inventory tracking system and the shops themselves.

These integrations were accomplished via overnight batch jobs. For the warehouse this worked fine, as stock only left the warehouse once per day, so the business could be sure that the batch update received each morning would remain valid for approximately 18 hours. For the shops this created a problem, since stock could obviously leave the shops at any point throughout the working day.

Given this constraint, the website only offered for sale stock that was in the warehouse. The analytics from the site, combined with the shop stock data received the following day, made clear sales were being lost as a result: required stock had been available in a store all day, but the batch nature of the legacy integration made this impossible to take advantage of.

In this case a new inventory component was created, initially for use only by the website, but with the goal of becoming the new system of record for the organization as a whole. This component integrated directly with the in-store till systems, which were perfectly capable of providing near real-time updates as and when sales took place. In fact the business had invested in a highly reliable network linking their stores in order to support electronic payments, a network that had plenty of spare capacity. Warehouse stock levels were initially pulled from the legacy systems, with the longer-term goal of also reverting this to source at a later stage.

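Behind the scenes the new inventory component is essentially a consumer of till sale events, adjusting in-store stock as sales happen instead of waiting for the overnight batch. A simplified sketch; the event shape and method names are invented for illustration:

```python
from collections import defaultdict


class InventoryComponent:
    """Near real-time in-store stock from till events; warehouse
    levels still arrive via the legacy batch feed for now."""

    def __init__(self):
        self._store_stock = defaultdict(int)  # (store_id, sku) -> qty
        self._warehouse_stock = {}

    def on_store_delivery(self, store_id: str, sku: str, qty: int) -> None:
        self._store_stock[(store_id, sku)] += qty

    def on_till_sale(self, store_id: str, sku: str, qty: int) -> None:
        # Direct, near real-time flow from the in-store tills.
        self._store_stock[(store_id, sku)] -= qty

    def load_warehouse_batch(self, snapshot: dict[str, int]) -> None:
        # Still sourced via legacy; a later stage reverts this to
        # the warehouse system itself.
        self._warehouse_stock = dict(snapshot)

    def available_online(self, sku: str) -> int:
        # The website can now safely offer in-store stock as well.
        in_stores = sum(qty for (_, s), qty in self._store_stock.items()
                        if s == sku)
        return self._warehouse_stock.get(sku, 0) + max(in_stores, 0)
```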
The end result was a website that could safely offer in-store stock for both in-store reservation and for sale online, alongside a new inventory component offering richer and more timely data on stock movements. By reverting to source for the new inventory component the organization also realized they could get access to much more timely sales data, which at that time was also only updated into legacy via a batch process. Reference data such as product lines and prices continued to flow to the in-store systems via the mainframe, which was perfectly acceptable given this changed only infrequently.
