Tagged: banks

  • @fintechna 12:18 pm on May 28, 2017 Permalink | Reply
    Tags: banks

    Breaking Banks: Irish Fintech, The EU Gateway 

    This episode focuses on the fintech landscape of Ireland, a country establishing itself as an international hub for many startups within the industry. The episode discusses the role Ireland&#8217;s tech community plays in empowering fintech growth, the effects of Brexit on the country&#8217;s tech community, and how that growth […]
    Bank Innovation

  • @fintechna 7:32 am on May 25, 2017 Permalink | Reply
    Tags: banks, Surpassing

    FinTechs Are Surpassing Banks On Cross-Border Payments 

    TransferWise has launched a Borderless Account for people and companies that do business across national boundaries.
    Financial Technology

  • @fintechna 7:01 am on May 24, 2017 Permalink | Reply
    Tags: banks, Pendo, Systems, Uncovers

    Pendo Systems Uncovers A Global Bank’s “Dark Data” At High Speed 

    Pendo Systems can find important information in millions of pages of documentation. This information is often called “dark data” because it has not been stored in a standard database. It exists, but it has been hard to find.
    Financial Technology

  • @fintechna 12:19 am on May 21, 2017 Permalink | Reply
    Tags: banks

    Consumers Expect Innovation to Come From Banks, Not Fintechs, Not Even Facebook 

    Despite all the shiny fintechs out there, consumers still expect innovation in financial services to come directly from banks. Almost half of the consumers (49% of women and 45% of men) said they are most excited to see new financial product launches at banks, according to a study released yesterday by EY. All other alternatives […]
    Bank Innovation

  • @fintechna 3:38 pm on May 20, 2017 Permalink | Reply
    Tags: banks, economy…and, urgent

    Four new, urgent bank models fit for winning in the digital economy…and beyond 


    As a youngster, former U.S. President Ronald Reagan couldn’t decide what type of shoes he wanted. Unwilling to wait any longer for an answer, the local cobbler ended up giving Reagan one square-toed and one round-toed shoe. Reagan later commented, “I learned right there and then that if you don’t make your own decisions, someone else will make them for you.”

    Today’s retail and commercial banks also find themselves in need of “new shoes”. Their old business model is wearing thin and is unfit for a digital world. It’s being eroded by inhospitable macroeconomics, new de-risking and open banking regulations, fintech growth, consumer behavior increasingly favoring non-traditional players, and other market-specific drivers. If banks fail to make an explicit decision on the evolution of their business model, other more decisive actors will decide for them.

    Archetypal bank business models fit for the digital future…and beyond

    Source: Accenture

    We identified four archetypal business models that can be successful for banks:

    • Digital Relationship Manager—the first choice for most big banks that have the investment capacity to expand on their vertically-integrated business model, and serve a wide range of customer needs and segments. Though this approach appears evolutionary, success requires radical, revolutionary change—from true physical-with-digital channel integration and real-time, hyper-relevant contextual advice to customer-driven solutions (not products) and a curated ecosystem approach where the bank can profit from the platforms of digital natives, like Google and Amazon®. It is indeed a new pair of shoes. Also, Digital Relationship Managers are more likely to evolve further to Banking as a Living Business, the industry’s next growth curve focused on relevancy and vitality banking.
    • Digital Category Killer—where banks focus on doing one thing very well to serve a narrow niche. Today’s exemplars include PayPal® in payments, Quicken Loans® in mortgages and Betterment in wealth management. Done well, the Digital Category Killer can force itself into new distribution channels (like being a provider to a Digital Relationship Manager), because it creates customer demand. Still, its success depends on other banks’ inability to do many things equally as well, and it can be difficult to diversify and look for a way to expand the single offering towards long-term growth.
    • Open Platform Player—offering a customer-centred platform through which other best-in-breed product providers can interact with customers, create and sell products and services, and share value. Our consumer survey indicates that an increasing number of customers are willing to build their own bank through such a platform. Yet as more digital time is being spent on a smaller number of multi-functional platforms, the Open Platform Player must avoid being assimilated into the broader platforms of the digital natives.
    • Utility Provider—narrowing the bank’s customer focus and value chain participation to offer end-to-end product solutions or simply a regulated entity for others to book deposits and loans in. Success means mastering the packaging and provision of compliant financial services for others, while using specialist talent and technology to keep overhead costs as low as possible. While the utility model can be a good, steady, non-threatening way to earn income, giving up end customers is a daunting prospect for most banks and establishing differentiation can be hard.

    Rather than ending up with mismatched shoes that limit their ability to compete, banks can decide to control their own path. They must map their strategic evolution and de-prioritize initiatives that don’t help them along that path. They must then have the focus and discipline to execute, rather than be tempted to hedge their bets.

    I invite you to read the full report, Winning in the digital economy: The urgent business model choices facing retail and commercial banks. In it, we detail each archetypal business model, offer a high-level starting point for banks to take a realistic view of their fit to each one, and identify a few key execution rules for building a bank that can win in the digital economy.

    The post Four new, urgent bank models fit for winning in the digital economy…and beyond appeared first on Accenture Banking Blog.

    Accenture Banking Blog

  • @fintechna 12:19 pm on May 20, 2017 Permalink | Reply
    Tags: banks

    Blockchain Won’t Kill the Banks (Banks Say) 

    Decentralizing banking processes won’t actually kill financial service companies, at least according to those financial service companies. In fact, decentralization may be beneficial for banks, depending on “what do you mean by decentralization,” said Jared Harwayne-Gidansky, North American lead of the emerging business and technology group for BNY Mellon. “Decentralization where you are innovating […]
    Bank Innovation

  • @fintechna 12:38 pm on May 18, 2017 Permalink | Reply
    Tags: Alley, banks

    5 Fintechs To Watch From the Startup Alley 

    In a mood for some inspiration? Take a walk down the Startup Alley. At the TechCrunch Disrupt 2017 event, currently taking place in New York, Bank Innovation came across a gauntlet of bright, shiny new fintechs. Here are five of our favorites: Spendwallet While major banks are busy building NFC-enabled digital wallets, this fintech […]
    Bank Innovation

  • @fintechna 7:17 am on May 18, 2017 Permalink | Reply
    Tags: banks, kyc, risk burden

    Effectively dealing with regulatory and risk burden in the financial services industry 



    It is no surprise that, with legislation growing ever more stringent, especially in the realm of anti-money laundering, all-too-often one-size-fits-all policies and regulations are stifling growth and exponentially increasing the onus on businesses across sectors and industries, and nowhere more so than in financial services.

    Regulatory burden is regularly cited as the main problem area for banks and financial services providers on both sides of the Atlantic and beyond, with three of the top five reasons directly linked to regulatory bodies shifting up a gear, namely KYC, transaction monitoring and the ensuing reporting requirements.

    Equally unsurprisingly, this situation has two direct and immediate effects in the banking world: a) the gradual and relentless disappearance of community banks and smaller banking operations, with over 25% of all outfits with capitalisation of less than 100 million USD disappearing over the course of the past 20 years, as reported by the American Banking Association, and b) regardless of size, the increased aversion to risk by financial services providers across the board.

    While the former can be partially explained away through mitigating factors such as conglomerate mergers and turbulent market conditions over the past two decades, the latter is a consequence of the continued inability to adapt and comply efficiently with legislative requirements, demands that are hardly going to be alleviated as thresholds lower and the net widens.

    As clearly shown by the findings of the 2016 Thomson Reuters survey, the average cost for KYC and CDD compliance by financial firms is approx. 60 million USD, shooting up to 9 times that in a number of cases. The industry’s response to the increased demands posed is an almost disingenuously simple one: throw more resources and money at the problem and pray it sorts itself out.

    In reality, the opposite has been found to be true: onboarding times are on a steady increase, estimated to take 50% longer in 2017 than they did in 2015, with customers’ responses directly contradicting the banks’ belief that correct, timely and full ongoing information was being provided (hence putting into question the veracity and therefore validity of the exercise itself).

    Providers are struggling to keep up with requirements at the onboarding stage, and it is even more worrying to note that financial services providers of all sizes and types are further unable to keep abreast, efficiently or otherwise, of the ongoing vetting and risk assessment due on previously approved applicants.

    As a consequence, the industry’s inability to keep up and to manage the additional impositions has seen the appetite for exposure being directly impacted, with all the snowball effects that this has on bottom lines, the economy and the future.

    Effectively, financial service operators are increasingly becoming information warehouses, and no amount of increased human resource spend will ever be sufficient to manage the volumes of data requiring processing. The increased reliance (if not total dependence) on ever-growing specialised risk and fraud teams has created an inevitable bottleneck and a false sense of security that no more than an acceptable minimum is slipping through the cracks, when the facts and figures spell otherwise.

    While financial providers are having to allocate a growing percentage of their non-interest expenses (estimated by the Federal Reserve to be around 9% in most cases, down to around 3% for outfits with asset valuations between 1-10 billion USD) to cover specialist resource costs, make up for losses incurred through miscalculated risk and pay fines levied for regulatory non-compliance, the facts and figures squarely indicate that the situation is entirely untenable.

    The latest developments in the FinTech and RegTech universe, however, offer a clear and cost-effective solution that allows specialised efforts to be refocused, automating a huge portion of both the new customer onboarding process and the maintenance and ongoing assessment of client portfolios, enabling risk and fraud efforts to be redirected where it really matters: the upper percentage of customer accounts that are to be considered of medium-high risk.

    In a world full of customer onboarding tools, data analysis software and customer screening services, the Aqubix KYC Portal stands out by uniquely providing a fully tailored and customised platform through which true automation can be achieved. KYC Portal simplifies and delivers efficiency gains across the entire process: from the initial acquisition of customers, through the automatic determination of the exposure posed according to the prevailing risk appetite internal to the organisation or department, to full KYC and AML compliance irrespective of the operation’s jurisdictional requirements, and the fully automated ongoing assessment of all clients.

    Connecting independently and seamlessly to any third-party service providers of choice (be they screening services, document verification providers, external data warehouses etc) and internal data sources alike, KYC Portal opens up a previously untapped realm of data management and analysis opportunities that directly impacts operational efficiencies (with improvements of over 60%, by the most conservative of estimates) through the significantly reduced time frames required to onboard new clients, the drastic reduction of touch points during the process and the delegation of the initial data collection away from the specialised risk and fraud core.

    Through a trigger and alert notification system, KYC Portal effectively sifts through new customers and automatically (based on predefined parameters reflecting the organisational procedures and practices) segments applicants by their risk value, removing the need for intervention on low-risk applicants or those beyond acceptable risk thresholds. In this manner specialist attention is refocused exclusively where it is needed: the high-value but equally higher-risk accounts.

    Even at extended due diligence stages, KYC Portal offers a plethora of unique tools easing, speeding up and further securing the process, not least amongst which are the in-built, plug-in-free face-to-face video interview recording and storage, facial recognition, and customer overview dashboard tools, ensuring that human bias and limitations are done away with at all points in the process.

    Following onboarding, KYC Portal automatically queries all existing customer records on a continuous basis, against any number and type of external and internal data sources, to ensure that any changes in the status and background of all accounts are immediately flagged and notified to the correct personnel, as are any changes in documentary validity and requirements.

    Operating on a highly configurable notification logic, KYC Portal’s infinite customisability not only ensures that no single trigger goes unalerted, but equally that no resources are wasted on unnecessary investigations and account queries.

    Building on an infinitely scalable and modular architecture, and married to a pure risk-based logic set, KYC Portal offers a plethora of additional modules which include transaction monitoring and assessment, with automatic notifications occurring in real-time whenever preset rules and ranges are triggered on an individual basis.

    KYC Portal will be presented this June, 7th and 8th at the Harnessing FinTech Innovation in Retail Banking conference in London, where Aqubix are the event’s Lead Partner and main exhibitors. Aqubix CEO Kristoff Zammit Ciantar’s keynote speech “Automating compliance – the problem, the solution, the innovation” will open the 2-day event, where Aqubix will also be hosting 2 round tables on the operational impact of the innovation and potential offered by KYC Portal.

    Kristoff Zammit Ciantar is CEO of Aqubix and the author of this article

    For further information ahead of the event, or to discover how KYC Portal can help solve your organisation’s Compliance, AML and Risk problems, contact Adrian Darmanin, Chief Commercial Officer on sales@aqubix.com.

  • @fintechna 5:53 pm on May 17, 2017 Permalink | Reply
    Tags: banks

    The Blockchain Immutability Myth 


    Where flexible thinking is preferable to dogmatism

    “The highest good, than which there is no higher, is God, and consequently it is immutably good, hence truly eternal and truly immortal.” — Saint Augustine, De natura boni, i, 405 C.E. (with minor edits)

    If you ask someone well-informed about the characteristics of blockchains, the word “immutable” will invariably appear in the response. In plain English, this word is used to denote something which can never be modified or changed. In a blockchain, it refers to the global log of transactions, which is created by consensus between the chain’s participants. The basic notion is this: once a blockchain transaction has received a sufficient level of validation, some cryptography ensures that it can never be replaced or reversed. This marks blockchains as different from regular files or databases, in which information can be edited and deleted at will. Or so the theory goes.

    In the raucous arena of blockchain debate, immutability has become a quasi-religious doctrine – a core belief that must not be shaken or questioned. And just like the doctrines in mainstream religions, members of opposing camps use immutability as a weapon of derision and ridicule. The past year has witnessed two prominent examples:

    • Cryptocurrency advocates claiming that immutability can only be achieved through decentralized economic mechanisms such as proof-of-work. From this perspective, private blockchains are laughable because they depend on the collective good behavior of a known group of validators, who clearly cannot be trusted.
    • Scorn poured on the idea of an editable (or mutable) blockchain, in which retroactive modifications can be made to the transaction history under certain conditions. Mockers posed the question: What could possibly be the point of a blockchain if its contents can easily be changed?

    For those of us on the sidelines, it’s fun to watch the mudslinging. Not least because both of these criticisms are plain wrong, and stem from a fundamental misunderstanding of the nature of immutability in blockchains (and indeed any computer system). For those short on time, here’s the bottom line:

    In blockchains, there is no such thing as perfect immutability. The real question is: What are the conditions under which a particular blockchain can and cannot be changed? And do those conditions match the problem we’re trying to solve?

    To put it another way, a blockchain’s transactions are not written into the mind of God (with apologies to Augustine above). Instead, the chain’s behavior depends on a network of corporeal computer systems, which will always be vulnerable to destruction or corruption. But before we get into the details of how, let’s proceed by recapping some basics of blockchains themselves.

    Blockchains in brief

    A blockchain runs on a set of nodes, each of which may be under the control of a separate company or organization. These nodes connect to each other in a dense peer-to-peer network, so that no individual node acts as a central point of control or failure. Each node can generate and digitally sign transactions which represent operations in some kind of ledger or database, and these transactions rapidly propagate to other nodes across the network in a gossip-like way.

    Each node independently verifies every new incoming transaction for validity, in terms of: (a) its compliance with the blockchain’s rules, (b) its digital signature and (c) any conflicts with previously seen transactions. If a transaction passes these tests, it enters that node’s local list of provisional unconfirmed transactions (the “memory pool”), and will be forwarded on to its peers. Transactions which fail are rejected outright, while others whose evaluation depends on unseen transactions are placed in a temporary holding area (the “orphan pool”).
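The routing logic just described can be sketched in a few lines of Python. This is a simplified illustration only: the three check callbacks and the pool structures are hypothetical stand-ins, not any real node implementation.

```python
# Simplified sketch of per-node transaction handling. The callbacks
# (rules_ok, signature_ok, inputs_seen) stand in for the three checks
# described in the text; they are illustrative, not a real protocol.
memory_pool = {}   # provisional unconfirmed transactions ("mempool")
orphan_pool = {}   # transactions whose inputs haven't been seen yet

def route_transaction(tx, rules_ok, signature_ok, inputs_seen):
    """Verify an incoming transaction and place it in the right pool."""
    if not rules_ok(tx) or not signature_ok(tx):
        return "rejected"              # fails outright, not forwarded
    if not inputs_seen(tx):
        orphan_pool[tx["id"]] = tx     # depends on unseen transactions
        return "orphaned"
    memory_pool[tx["id"]] = tx         # valid: kept and forwarded to peers
    return "pooled"
```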

    At periodic intervals, a new block is generated by one of the “validator” nodes on the network, containing a set of as-yet unconfirmed transactions. Every block has a unique 32-byte identifier called a “hash”, which is determined entirely by the block’s contents. Each block also includes a timestamp and a link to a previous block via its hash, creating a literal “block chain” going back to the very beginning.
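The hash-linking mechanism can be illustrated with a minimal Python sketch. SHA-256 over a canonical JSON serialization stands in here for whatever hashing scheme a real chain uses; the block fields are illustrative.

```python
import hashlib
import json

def block_hash(block):
    """A 32-byte identifier determined entirely by the block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash, timestamp):
    return {"timestamp": timestamp, "prev": prev_hash, "txs": transactions}

genesis = make_block(["coinbase"], "0" * 64, 1)
block1 = make_block(["tx1", "tx2"], block_hash(genesis), 2)
# block1 commits to genesis: any edit to genesis changes its hash and
# breaks the link stored in block1; hence a literal "block chain".
```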

    Just like transactions, blocks propagate across the network in a peer-to-peer fashion and are independently verified by each node. To be accepted by a node, a block must contain a set of valid transactions which do not conflict with each other or with those in the previous blocks linked. If a block passes this and other tests, it is added to that node’s local copy of the blockchain, and the transactions within are “confirmed”. Any transactions in the node’s memory pool or orphan pool which conflict with those in the new block are immediately discarded.
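Block acceptance at a node follows the same pattern and can be sketched similarly. Again illustrative: `is_valid_tx` is a hypothetical callback bundling the per-transaction checks, and the mempool is modeled as a plain set.

```python
def accept_block(chain, block, mempool, is_valid_tx):
    """Append a block if its transactions are valid and non-conflicting."""
    txs = block["txs"]
    if not all(is_valid_tx(t) for t in txs) or len(set(txs)) != len(txs):
        return False                  # invalid or internally conflicting
    chain.append(block)               # its transactions are now "confirmed"
    for t in txs:
        mempool.discard(t)            # drop transactions confirmed by the block
    return True
```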

    Every chain employs some sort of strategy to ensure that blocks are generated by a plurality of its participants. This ensures that no individual or small group of nodes can seize control of the blockchain’s contents. Most public blockchains like bitcoin use “proof-of-work”, which allows blocks to be created by anyone on the Internet who can solve a pointless and fiendishly difficult mathematical puzzle. By contrast, in private blockchains, blocks tend to be signed by one or more permitted validators, using an appropriate scheme to prevent minority control. Our product MultiChain uses a technique called “mining diversity” which requires a minimum proportion of the permitted validators to participate in order to create a valid chain.

    Depending on the consensus mechanism used, two different validator nodes might simultaneously generate conflicting blocks, both of which point to the same previous one. When such a “fork” happens, different nodes in the network will see different blocks first, leading them to have different opinions about the chain’s recent history. These forks are automatically resolved by the blockchain software, with consensus regained once a new block arrives on one of the branches. Nodes that were on the shorter branch automatically rewind their last block and replay the two blocks on the longer one. If we’re really unlucky and both branches are extended simultaneously, the conflict will be resolved after the third block on one branch, or the one after that, and so on. In practice, the probability of a fork persisting drops exponentially as its length increases. In private chains with a limited set of validators, the likelihood can be reduced to zero after a small number of blocks.
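In its simplest form, the rewind-and-replay rule reduces to preferring the longer branch. This is a sketch under that simplification: real proof-of-work chains compare cumulative difficulty rather than raw block count, as the text notes later.

```python
def resolve_fork(local_branch, rival_branch):
    """Switch to the rival branch only if it is strictly longer."""
    if len(rival_branch) > len(local_branch):
        # rewind our blocks past the fork point and replay the rival's
        return rival_branch
    return local_branch
```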

    Nonetheless, it’s important to remember that each node is running on a computer system owned and controlled by a particular person or organization, so the blockchain cannot force it to do anything. The purpose of the chain is to help honest nodes to stay in sync, but if enough of its participants choose to change the rules, no earthly power can stop them. That’s why we need to stop asking whether a particular blockchain is truly and absolutely immutable, because the answer will always be no. Instead, we should consider the conditions under which a particular blockchain can be modified, and then check if we’re comfortable with those conditions for the use case we have in mind.

    Mutability in public chains

    Let’s return to the two examples cited in the introduction, in which the doctrine of immutability has been used as a basis for ridicule. We’ll begin with the claim that the consensual validation procedures used in permissioned blockchains cannot bring about the “true immutability” promised by public chains.

    This criticism is most easily addressed by pointing to the vulnerability of public blockchains themselves. Take, for example, the Ethereum blockchain, which suffered a devastating exploit in June 2016. Someone found a coding loophole in a smart contract called “The DAO”, in which almost $250 million had been invested, and began draining its funds at speed. While this clearly violated the intentions of the contract’s creators and investors, its terms and conditions relied on the mantra that “code is law”. Law or not, less than a month later, the Ethereum software was updated to prevent the hacker from withdrawing the cryptocurrency “earned”.

    Of course, this update could not be enforced, since every Ethereum user controls their own computer. Nonetheless, it was publicly supported by Vitalik Buterin, Ethereum’s founder, as well as many other community leaders. As a result, most users complied, and the blockchain with the new rules kept the name “Ethereum”. A minority disagreed with the change and continued the blockchain according to its original rules, earning the title “Ethereum Classic”. A more accurate choice of names might be “Ethereum compromised” and “Ethereum the pure”. Either way, democracy is democracy, and (the pragmatic and popular) “Ethereum” is now worth over ten times (the idealistic but sidelined) “Ethereum Classic”.

    Now let’s consider a less benevolent way in which public blockchain immutability can be undermined. Recall that block creation or “mining” in bitcoin and Ethereum uses a proof-of-work scheme, in which a mathematical problem must be solved in order to generate a block and claim its reward. The value of this reward inevitably turns mining into an arms race, with miners competing to solve the problems faster. To compensate, the network periodically adjusts the difficulty to maintain a constant rate of block creation, once every 10 minutes in bitcoin or 15 seconds in Ethereum.

    In the last 5 years, bitcoin’s difficulty has increased by a factor of 350,000×. Today, the vast majority of bitcoin mining takes place on expensive specialized hardware, in locations where the weather is cold and electricity is cheap. For example, $1,089 will buy you an Antminer S9, which mines blocks 10,000 times faster than any desktop computer and burns 10 times more electricity. This is all a long way from the democratic ideals with which bitcoin was created, even if it does make the blockchain extremely secure.

    Well, kind of secure. If someone wanted to undermine the immutability of the bitcoin blockchain, here’s how they would do it. First, they would install more mining capacity than the rest of the network put together, creating a so-called “51% attack”. Second, instead of openly participating in the mining process, they would mine their own “secret branch”, containing whichever transactions they approve and censoring the rest. Finally, when the desired amount of time had passed, they would anonymously broadcast their secret branch to the network. Since the attacker has more mining power than the rest of the network, their branch will contain more proof-of-work than the public one. Every bitcoin node will therefore switch over, since the rules of bitcoin state that the more difficult branch wins. Any previously confirmed transactions not in the secret branch will be reversed, and the bitcoin they spent could be sent elsewhere.

    By now, most bitcoin believers will be laughing, because I wrote “install more mining capacity than the rest of the network put together” as if this is trivial to achieve. And they have a point, because of course it’s not easy, otherwise lots of people would already have done it. You need a lot of mining equipment, and a lot of electricity to power it, both of which cost a ton of money. But here’s the inconvenient fact that most bitcoiners brush over: For the government of any mid-size country, the money required is still small change.

    Let’s estimate the cost of a 51% attack which reverses a year of bitcoin transactions. At the current bitcoin price of $1500 and reward of 15 bitcoins (including transaction fees) per 10-minute block, miners earn around $1.2 billion per year ($1500 × 15 × 6 × 24 × 365). Assuming (reasonably) that they are not losing money overall, or at least not losing much, this means that total miner expenses must also be in the same range. (I’m simplifying here by amortizing the one-time cost of purchasing mining equipment, but $400 million will buy you enough Antminer S9s to match the current bitcoin network’s mining capacity, so we’re in the right ball park.)
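The revenue estimate is straightforward arithmetic and can be checked directly, using the figures quoted in the surrounding text (including the $3 trillion Chinese tax revenue mentioned in the next paragraph):

```python
price_usd = 1500                 # quoted price per bitcoin
reward_btc = 15                  # block reward including transaction fees
blocks_per_year = 6 * 24 * 365   # one block every 10 minutes

annual_miner_revenue = price_usd * reward_btc * blocks_per_year
print(annual_miner_revenue)      # 1182600000, i.e. roughly $1.2 billion

# As a share of ~$3 trillion in annual Chinese tax revenue:
share_pct = annual_miner_revenue / 3_000_000_000_000 * 100
print(round(share_pct, 2))       # 0.04 (percent)
```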

    Now think about the reports that bitcoin is being used by Chinese citizens to circumvent their country’s capital controls. And consider further that the Chinese government’s tax revenues are approximately $3 trillion per year. Would a non-democratic country’s government spend 0.04% of its budget to shut down a popular method for illegally taking money out of that country? I wouldn’t claim that the answer is necessarily yes. But if you think the answer is definitely no, you’re being more than a little naive. Especially considering that China reportedly employs 2 million people to police Internet content, which totals $10 billion/year if we assume a low wage of $5,000. That puts the $1.2 billion cost of reversing a year of bitcoin transactions in perspective.

    Even this analysis understates the problem, because the Chinese government could undermine the bitcoin network much more easily and cheaply. It appears that the majority of bitcoin mining takes place in China, due to low-cost hydroelectric power and other factors. Given a few tanks and platoons, China’s army could physically seize these bitcoin mining operations, and repurpose them to censor or reverse transactions. While the wider bitcoin world would undoubtedly notice, there’s nothing it could do without fundamentally altering the governance structure (and therefore nature) of bitcoin itself. What was that about censorship free money?

    None of this should be construed as a criticism of bitcoin’s design, or a prediction that a network catastrophe will actually happen. The bitcoin blockchain is a remarkable piece of engineering, perhaps even perfect for the purpose its creator(s) had in mind. And if I had to put money on it, I would bet that China and other governments probably won’t attack bitcoin in this way, because it’s not in their ultimate interest to do so. More likely, they’ll focus their wrath on its more untraceable cousins like Dash, Zcash and Monero.

    Nonetheless, the mere possibility of this form of interference puts the cryptocurrency immutability doctrine in its place. The bitcoin blockchain and its ilk are not immutable in any perfect or absolute sense. Rather, they are immutable so long as nobody big enough and rich enough decides to destroy them. Still, by relying on the economic cost of subverting the network, cryptocurrency immutability satisfies the specific needs of people who don’t want to trust governments, companies and banks. It may not be perfect, but it’s the best they can do.

    Rewriteable private chains

    Now let’s move on to private blockchains, designed for the needs of governments and large companies. We can begin by noting that, from the perspective of these organizations, immutability based on proof-of-work is a commercial, legal and regulatory non-starter, because it allows any (sufficiently rich) actor to anonymously attack the network. For institutions, immutability can only be grounded in the good behavior of other similar institutions, with whom they can sign a contract and sue if need be. As a bonus, private blockchains are far less costly to run, since blocks only need a simple digital signature from the nodes that approve them. So long as a majority of validator nodes are following the rules, the end result is stronger and cheaper immutability than any public cryptocurrency can offer.

    Of course, immutability is still easy to undermine if all the participants in a chain decide to do so together. Let’s imagine a private blockchain used by six hospitals to aggregate data on infections. A program in one hospital writes a large and erroneous data set to the chain, which is a source of inconvenience for the other participants. A few phone calls later, the IT departments of all the hospitals agree to “rewind” their nodes back one hour, delete the problematic data, and then allow the chain to continue as if nothing happened. If all the hospitals agree to do this, who’s going to stop them? Indeed, apart from the staff involved, who will even know that it happened? (It should be noted that some consensus algorithms like PBFT don’t provide an official mechanism for rollbacks, but this doesn’t help with governance since nodes are still free to bypass the rules.)

    Now consider a case where most of a private blockchain’s participants agree to rewind and remove some transaction, but a few withhold their consent. Since every organization’s node is under its ultimate control, nobody can force the minority to join the consensus. However, by sticking to their principles, these users will find themselves on a fork being ignored by everyone else. Like the virtuous proponents of Ethereum Classic, their place in heaven may well be assured. But back here on earth, they will be excluded from the consensus process for which the chain was deployed, and might as well give up completely. The only practical application of transactions outside the consensus is to serve as evidence in a court of law.

    With this in mind, let’s talk about the second case in which the doctrine of blockchain immutability has been used to ridicule ideas. Here, we’re referring to Accenture’s idea of using a chameleon hash to enable a block buried deep in a chain to be easily replaced. The primary motivation, as described by David Treat, is to allow an old problematic transaction to be quickly and efficiently removed. Under the scheme, if a block substitution does occur, a “scar” is left behind which all participants can see. (It should be noted that any later transactions that depend on the deleted one would need to be removed as well.)

    It’s hard to overstate how many people poured scorn on this idea when it was announced. Twitter and LinkedIn were aghast and aflutter. And I’m not just talking about the crypto crowd, which takes sporting pleasure in mocking anything related to enterprise blockchains. The idea was broadly slammed by private blockchain advocates as well.

    And yet, under the right conditions, the idea of allowing blockchains to be modified retroactively via chameleon hashes can make perfect sense. To understand why, we begin with a simple question: in this type of blockchain, who would actually have the power to replace old blocks? Clearly, it can’t be any unidentified network participant, because that would render the chain ungovernable.

    The answer is that a chameleon hash can only be used by those who hold its secret key. The key is required to enable a new version of a block, with different transactions, to be given the same chameleon hash as before. Of course, we probably don’t want centralized control in a blockchain, so we can make the scheme stronger by having multiple chameleon hashes per block, each of whose key is held by a different party. Or we might use secret sharing techniques to divide a single chameleon hash key between multiple parties. Either way, the chain can be configured so that a retroactive block substitution can only occur if a majority of key holders approve it. Is this starting to sound familiar?
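    To make the trapdoor property concrete, here is a minimal discrete-log chameleon hash, a standard construction in the literature rather than Accenture's specific scheme. The parameters are tiny toy values I chose for illustration and are nowhere near real-world security. The hash is H(m, r) = g^m · h^r mod p with h = g^x; anyone can compute it, but only the holder of the trapdoor x can find a new randomizer r′ that makes a *different* message produce the *same* hash.

```python
# Toy group: g has prime order q = 11 in Z_23*.
p, q, g = 23, 11, 4
x = 7                 # trapdoor (secret key)
h = pow(g, x, p)      # public key

def chash(m: int, r: int) -> int:
    """Chameleon hash H(m, r) = g^m * h^r mod p."""
    return (pow(g, m, p) * pow(h, r, p)) % p

def forge(m: int, r: int, m_new: int) -> int:
    """With the trapdoor x, find r' such that H(m_new, r') == H(m, r).
    Works because m + x*r == m_new + x*r' (mod q) when
    r' = r + (m - m_new) * x^-1 (mod q)."""
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 5, 3
m_new = 9                      # replacement block contents
r_new = forge(m, r, m_new)
assert chash(m_new, r_new) == chash(m, r)  # same hash, different content
```

Splitting x between several parties (e.g. via secret sharing) then gives exactly the majority-approval property the paragraph above describes.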

    Allow me to render the parallel more explicit. Let’s say that we share control over chameleon hashes between those same validating nodes which are responsible for block creation. This means that an old block can only be replaced if a majority of validating nodes agree to do so. And yet, as we discussed earlier, any blockchain can already be retroactively modified by a majority of validating nodes, via the rewind and replay mechanism. So in terms of governance, chameleon hashes subject to a validator majority make no difference at all.

    If so, why bother with them? The answer is: performance optimization, because chameleon hashes allow old blocks to be substituted in a chain far more efficiently than before. Imagine that we need to remove a transaction from the start of a blockchain that has been running for 5 years. Perhaps this is due to the European Union’s right to be forgotten legislation, which allows individuals to have their personal data removed from companies’ records. Nodes can’t just wipe the offending transaction from their disks, because that would change the corresponding block’s hash and break a link in the chain. The next time the blockchain was scanned or shared, everything would fall apart.

    To solve this problem without chameleon hashes, nodes would have to rewrite the early block without the problematic transaction, calculate the block’s new hash, then change the hash embedded in the next block to match. But this would also affect the next block’s own hash, which must be recalculated and updated in the subsequent block, and so on all the way along the chain. While this mechanism is possible in principle, it could take hours or days to complete in a blockchain with millions of blocks and transactions. Even worse, while engaged in this process, a node may be incapable of processing new incoming network activity. So chameleon hashes provide a far more computationally efficient way to achieve the same goal. If you imagine a bad transaction as a rock buried many miles underground, chameleon hashes can teleport the rock to the surface, instead of making us dig all the way down, retrieve the rock, and fill in the hole.
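    The cascade described above is easy to demonstrate with a toy hash chain of my own construction: change the very first block, and every subsequent block's hash must be recomputed, because each block embeds its predecessor's hash.

```python
import hashlib

def block_hash(prev_hash: bytes, data: bytes) -> bytes:
    return hashlib.sha256(prev_hash + data).digest()

def build(blocks):
    """Return the list of chained hashes for a sequence of block payloads."""
    hashes, prev = [], b"\x00" * 32
    for data in blocks:
        prev = block_hash(prev, data)
        hashes.append(prev)
    return hashes

blocks = [b"tx-%d" % i for i in range(1000)]
hashes = build(blocks)

# Edit the transaction in block 0: every later hash changes as well.
blocks[0] = b"tx-removed"
rehashed = build(blocks)
changed = sum(1 for a, b in zip(hashes, rehashed) if a != b)
assert changed == len(blocks)   # all 1000 links had to be recomputed
```

A chameleon hash short-circuits exactly this loop: the edited block keeps its old hash, so none of the downstream links need to change.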

    Immutability is nuanced

    By reviewing the risks of proof-of-work blockchains and the technical value of chameleon hashes, I hope to have convinced you that blockchain immutability is far more nuanced than a “yes or no” question. To quote Simon Taylor quoting Ian Grigg, the question must always be “who are you and what do you want to achieve?”

    For cryptocurrency believers who want to avoid government-issued money and the traditional banking system, it makes perfect sense to believe in a public proof-of-work blockchain, whose immutability rests on economics rather than trusted parties. Even if they must live with the possibility of a large government (or other wealthy actor) bringing down the network, they can take solace in the fact that this would be a painful and expensive operation. And no doubt they hope that cryptocurrencies will only get more secure, as their value and mining capacity continue to grow.

    On the other hand, for enterprises and other institutions that want to safely share a database across organizational boundaries, proof-of-work immutability makes no sense at all. Not only is it astoundingly expensive, but it allows any sufficiently motivated participant to anonymously seize control of the chain and censor or reverse transactions. What these users need is immutability grounded in the good behavior of a majority of identified validator nodes, backed by contracts and law.

    Finally, for most permissioned blockchain use cases, we probably don’t want validator nodes to be able to easily and cheaply substitute old blocks in the chain. As Dave Birch said at the time, “the way to correct a wrong debit is with a correct credit”, rather than pretending that the debit never took place. Nonetheless, for those cases where we do need the extra flexibility, chameleon hashes help make blockchains a practical choice.

    is CEO and Founder, Coin Sciences Ltd and this article was originally published here.

  • @fintechna 1:04 pm on May 17, 2017 Permalink | Reply
    Tags: , banks, , Comply, , ,   

    Is Your Board Ready To Comply With Cybersecurity Regulations? 

    Last year, regulators in New York decided to take matters for financial institutions into their own hands, releasing a set of rules (which went into effect in March) requiring banks and other FIs to establish a stricter cybersecurity program. This includes reporting all kinds of data breaches, ransomware, or phishing attacks, which could potentially harm […]
    Bank Innovation
