[ { "title" : "How To Get A 360 View of Your Customer By Managing Identity", "description" : "Get inside your customers' heads by centralizing your data.", "author_name" : "Diego Poza", "author_avatar" : "https://avatars3.githubusercontent.com/u/604869?v=3&s=200", "author_url" : "https://twitter.com/diegopoza", "tags" : "identity", "url" : "/360-view-of-customer-by-managing-identity/", "keyword" : "Having a 360 view of your customers might sound like just another marketing cliche designed to sell CRM software. But the idea, that you should have a comprehensive understanding of your customers at all stages of the lifecycle, from acquisition to referral, is essentially the holy grail of all growth marketing efforts. Back in the early days of apps and the internet, a 360 view was easier to acquire. User identities were simply more consolidated. You didn't have dozens of SaaS tools running off the cloud. Your users' identities weren't dispersed across Google, Facebook, and every other social platform; all your users' data was stored in your own system. You were in charge. Those days are gone. The landscape is fragmented by all different kinds of identity providers, authentication protocols, and tools. What you have instead is a vast explosion of user data and tools for analyzing that data. That's made getting a 360 view of your customers more technically complicated, but it's also made it far more powerful.

Your identity management system needs a single source of truth. The biggest roadblock to getting a 360 view of your users is how identity is managed. The complexity begins when a user signs up for your app, and it expands entropically as they use it. Your different SaaS tools and monitoring systems collect all of this information, send it back to their servers, and then you wind up fighting with their APIs and integrations to make sense of all of it. That's why the best analytics teams begin the process of collecting data at the root, when a user first signs up for an app. The easiest way to do that is to set up a classic 1:1 authentication system: you control user registration and login, you manage their passwords, and you are the only entity that is involved. Therefore, you have total access to your users' data and the data of any external APIs you might integrate with. What makes this tricky today is that federated identity management is such an established practice. You want to be enabling social and enterprise login if you have an app of any size, or a new app that you want people to feel comfortable using. Federated identity managers are simply not based on a centralized model; consolidating your customer view into one place will involve a ton of development time and effort. That's why we built Auth0, an identity provider that acts as both a classical identity provider and a federated identity manager. This allows you to create the foundations on which to start building and acting upon your 360 view of your customer.

Classic identity management: Auth0 stores login credentials for every unique user, provides user information on an analytics dashboard, and manages all of the users in your organization, no matter what they used to log in. Federated identity management: with social login, the user and password from Facebook, Google, Twitter, etc. serve as login information for connection to the desired platform through Auth0; with enterprise login, enterprise credentials are used to access a variety of systems and platforms, where the username and login are stored in an internal identity provider but connected through Auth0. This type of data collection means that you can connect user information that would otherwise be stored in completely separate places, maybe data that isn't even collected directly by your app. Putting the missing pieces in your user profile for a 360 view gives you what you need to customize your user interaction, from onboarding to upselling.

Automate the error out of your data centralization. Within your own system, you use a variety of tools to help you monitor and engage with users. Maybe you have an app analytics platform to gather behavioral data, an email management system to send personalized emails, a CRM, and a login platform like Auth0. These systems tend to rely on client-side analytics, which makes the perfect storm for data to slip through the cracks. Client-side analytics are notoriously unreliable. Because they have to be executed on the user's end when they first visit your site, they can easily be disrupted: users might exit out of a page before your code runs; they could block JavaScript, with an ad blocker, for instance; they might click on a link before the page has finished loading and interrupt the loading script. Meanwhile, server-side analytics are unwieldy and difficult to implement. In order to get all of your desired server-side analytics running with your system, you'll end up running through a web of APIs and writing more and more code just to get everything working smoothly. That's where Auth0 Rules come in. Rules are snippets of server-side JavaScript that run as soon as a user logs in, eliminating the client-side reliability problem while being just as easy to set up and get going. With Auth0 Rules, your server-side automation always gets information from your user to your tools. For example, if you wanted to create a lead in Salesforce the first time a user logs in, you can use an Auth0 Rule (https://github.com/auth0/rules/blob/master/rules/creates-lead-salesforce.md) to instantly send the signal to Salesforce to make a new lead. In fact, you can connect Auth0 with pretty much any platform by creating automated actions at sign-in using JavaScript, and they're just a few lines of code, not a forest of API integrations. You need to know that your tools are going to get the data they need to help you understand how your customers are behaving, and automating a cross-platform integration ensures that you realize the power of all of your SaaS platforms.

Use your centralized data to cater to your customers. A rich 360 view of your customers is a springboard for fine-tuning your product to increase customer satisfaction. The way you interact with your users from login onwards can make or break whether or not you retain those customers, no matter how good your product is. When you're unable to consolidate all the information that you know about your customers, you can easily wind up asking them the same questions twice. Getting information about who your customers really are turns into a slow and error-prone process. When you have a centralized repository of knowledge about your users and a reliable infrastructure for gathering data on them, however, you can put much more effective and subtle collection practices into place. A great example of this is progressive profiling. You want to ask your users questions about who they are and why they're using your product, but you know that slamming your users with a huge form as soon as they sign up kills your conversion rates. With Auth0, you can create a Rule that triggers every time your users log in, so you can ask them questions intermittently, throughout their customer journey, rather than giving them a huge survey up front. Auth0 helps you store all of this information through creating profile records. You can gather information from an existing identity, like a Facebook profile, an Auth0 Rule, or information the user adds to their profile. Each user's profile record gets updated as data rolls in, to keep all of that valuable information accessible. Capturing and centralizing login information is a great way to make your users' experience better, especially when it is part of a larger customer profile. When the user experience is good, your customers are satisfied, and there's nothing better for business than satisfied customers.

To delight, you must first understand. Creating a complete profile for your customer is one of the most important steps you can take to turn your users' actions into insights. Think of all the bad email marketing campaigns you've deleted from your inbox or marked as spam. Think of all the irrelevant push notifications you've gotten. Think of all the products you've downloaded with high hopes only to see an onboarding
process that seems like it wasn't designed with anyone in mind. When you centralize your data collection and build up the right infrastructure around it, the kind that lets everyone on your team get on the same page, then you become capable of building truly personalized, delightful experiences across every part of your app.", "image" : "https://cdn.auth0.com/blog/360-view-by-identity/managing-identity-logo.png", "date" : "March 24, 2017" } , { "title" : "Brute Forcing HS256 is Possible: The Importance of Using Strong Keys in Signing JWTs", "description" : "Cracking a JWT signed with weak keys is possible via brute force attacks. Learn how Auth0 protects against such attacks and alternative JWT signing methods provided.", "author_name" : "Prosper Otemuyiwa", "author_avatar" : "https://en.gravatar.com/avatar/1097492785caf9ffeebffeb624202d8f?s=200", "author_url" : "http://twitter.com/unicodeveloper?lang=en", "tags" : "jwt", "url" : "/brute-forcing-hs256-is-possible-the-importance-of-using-strong-keys-to-sign-jwts/", "keyword" : "JSON Web Tokens are an open, industry-standard (RFC 7519) method for representing claims securely between two parties. They can be digitally signed or encrypted, and there are several algorithms that can be employed in signing a JWT. In this article, we'll look at the two most common algorithms and discover how using weak keys can allow malicious parties to brute force the secret key from the JWT.

What is a JSON Web Token? A JSON Web Token encodes a series of claims in a JSON object. Some of these claims have specific meaning, while others are left to be interpreted by the users. These claims can be verified and trusted because the token is digitally signed. Examples of these claims are: issuer (iss), subject (sub), audience (aud), expiration time (exp), not before (nbf), and issued at (iat). JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA or elliptic curves.

Structure of a JSON Web Token: a signed, compact-serialized JWT consists of three main parts separated by a dot (.), namely: header, payload, and signature. A JWT comes in this structure: aaaaaa.bbbbbb.cccccc, where aaaaaa represents the header, bbbbbb represents the payload, and cccccc represents the signature.

Header: the header typically consists of two parts: the type of the token, which is JWT, and the hashing algorithm, such as HS256 or RS256. Example: { alg: HS256, typ: JWT }. Then this JSON is base64url encoded to form the first part of the JWT.

Payload: this part of the token carries the claims. An example of a payload can be found below: { sub: 1234567890, name: John Doe, manager: true }. The payload is then base64url encoded to form the second part of the JWT.

Signature: the last part of the token is the signature. The signature is composed from the signing of the encoded header, the encoded payload, and a secret. An example of a signature using the HMAC SHA256 algorithm can be created like so: HMACSHA256(base64UrlEncode(header) + '.' + base64UrlEncode(payload), secret).

JWT signing algorithms: the most common algorithms for signing JWTs are HMAC + SHA256 (HS256), RSASSA-PKCS1-v1_5 + SHA256 (RS256), and ECDSA + P-256 + SHA256 (ES256).

HS256: hash-based message authentication code (HMAC) is an algorithm that combines a certain payload with a secret using a cryptographic hash function like SHA-256. The result is a code that can be used to verify a message only if both the generating and verifying parties know the secret. In other words, HMACs allow messages to be verified through shared secrets. This is an example showcasing an HMAC-based signing algorithm: const encodedHeader = base64(utf8(JSON.stringify(header))); const encodedPayload = base64(utf8(JSON.stringify(payload))); const signature = base64(hmac(`${encodedHeader}.${encodedPayload}`, secret, sha256)); const jwt = `${encodedHeader}.${encodedPayload}.${signature}`. An example of signing a JWT with the HS256 algorithm using the jsonwebtoken JavaScript library can be found below: var jwt = require('jsonwebtoken'); const payload = { sub: ..., manager: true }; const secretKey = '<secret>'; const token = jwt.sign(payload, secretKey, { algorithm: 'HS256', expiresIn: '10m' // if omitted, the token will not expire });

RS256: RSA is a public-key algorithm. Public-key algorithms generate split keys: one public key and one private key. For public-key signing algorithms (RSASSA), the private key is used for signing. When signing and verifying JWTs signed with RS256, you deal with a public/private key pair rather than a shared secret. There are many ways to create RSA keys; OpenSSL is one of the most popular libraries for key creation and management. # Generate a private key: openssl genpkey -algorithm RSA -out private_key.pem -pkeyopt rsa_keygen_bits:2048. # Derive the public key from the private key: openssl rsa -pubout -in private_key.pem -out public_key.pem. Both PEM files are simple text files; their contents can be copied and pasted into your JavaScript source files and passed to the jsonwebtoken library. // You can get this from private_key.pem above: const privateRsaKey = `<your-private-rsa-key>`; const signed = jwt.sign(payload, privateRsaKey, { algorithm: 'RS256', expiresIn: '5s' }); // You can get this from public_key.pem: const publicRsaKey = `<your-public-rsa-key>`; const decoded = jwt.verify(signed, publicRsaKey, { // Never forget to make this explicit to prevent signature stripping attacks: algorithms: ['RS256'] });

ES256: ECDSA algorithms also make use of public keys. We can use OpenSSL to generate the key as well. # Generate a private key (prime256v1 is the name of the parameters used to generate the key; this is the same as P-256 in the JWA spec): openssl ecparam -name prime256v1 -genkey -noout -out ecdsa_private_key.pem. # Derive the public key from the private key: openssl ec -in ecdsa_private_key.pem -pubout -out ecdsa_public_key.pem. If you open these files you will note that there is much less data in them; this is one of the benefits of ECDSA over RSA. The generated files are in PEM format as well, so simply pasting them in your source will suffice: const privateEcdsaKey = `<your-private-ecdsa-key>`; const publicEcdsaKey = `<your-public-ecdsa-key>`; then sign with privateEcdsaKey and verify with publicEcdsaKey as in the RS256 example. Note: these algorithm notes above are excerpts from the very comprehensive Auth0 JWT book written by Sebastian Peyrott; download it for more information on signing and validating JWTs using the algorithms mentioned above.

Brute forcing an HS256 JSON Web Token: as secure as HS256 is, especially when implemented the right way, brute-forcing a JSON Web Token signed with small and medium-sized shared secrets using HS256 is still very possible. Recently, I came across a tool written in C on GitHub; it is a multi-threaded JWT brute force cracker. With huge computing power, this tool can find the secret key of an HS256 JSON Web Token. Please note the RFC 7518 standard states that a key of the same size as the hash output (for instance, 256 bits for HS256) or larger must be used with this algorithm. Auth0 secret keys exceed this requirement, making cracking via this or similar tools all but impossible.

Implementing a brute force attack: I used a Mac computer to try out the brute force attack. First, make sure you have OpenSSL installed. If it is not, install it with Homebrew like so: brew install openssl. Then run this command in the terminal: make OPENSSL=/usr/local/opt/openssl/include OPENSSL_LIB=-L/usr/local/opt/openssl/lib. On Ubuntu, you can install OpenSSL like so: apt-get install libssl-dev. The specs of my MacBook are mentioned below: Processor 2.7 GHz Intel Core i5; Memory 8GB 1867 MHz DDR3; Graphics Intel Iris Graphics 6100 1536 MB. Go ahead and clone the jwt-cracker from GitHub. An example JWT signed with HS256 and a secret Sn1f is: eyjhbgcioijiuzi1niisinr5cci6ikpxvcj9.eyjzdwiioiixmjm0nty3odkwiiwibmftzsi6ikpvag4grg9liiwiywrtaw4ionrydwv9.caoiaifu3fykvhkhpbuhbvth807-z2ri1fs3vx1xmje. Now run the jwt-cracker from your terminal to crack the token like so: time ./jwtcrack eyjhbgcioijiuzi1niisinr5cci6ikpxvcj9...caoiaifu3fykvhkhpbuhbvth807-z2ri1fs3vx1xmje. Note: make sure the jwtcrack script is executable by running chmod a+x ./jwtcrack. It took about 616s on my laptop to crack the secret key. With the help of jwt.io, let's sign another token quickly, but with a secret of secret. Run the cracker again with the new JWT (its signature is tjva95orm7e2cbab30rmhrhdcefxjoyzgefonfh7hgq). From the results shown above, it cracked the token and got our secret, which is actually secret, in about 327351s.

Security concerns and recommendation: let's take another look at the keys we used to generate the tokens that were cracked easily. What are the key sizes? The first key, Sn1f, is 32-bit (1 character = 8 bits). The second key, secret, is 48-bit. This is simply too short to be a valid key. In fact, the JSON Web Algorithms spec (RFC 7518) states that a key of the same size as the hash output or larger must be used with the HS256 algorithm. I therefore recommend that anyone trying to generate a JSON Web Token and sign it with HS256 use a properly sized secret key. Auth0 secret keys are 512 bits in length and not susceptible to this type of brute force attack. Additionally, Auth0 allows you to easily sign your JWTs with RS256.

Using Auth0 to sign JWTs with RS256: with Auth0, you can easily generate JWTs for authentication and authorization. By default, we use HS256 to sign the JWTs generated, but we also allow customers to use RS256 if their use case calls for it. The Auth0 Lock library returns a signed JWT that you can store on the client side and use for future requests to your APIs. In the vast majority of use cases you would never need to change the signing algorithm, but on the off chance that you do, let's see how to accomplish it with Auth0. Create a client on the dashboard, go to Settings, and scroll down to Show Advanced Settings. Switching to RS256 is as easy as selecting the option from the dropdown on the Auth0 dashboard; the default is HS256. Switching to RS256 is simple.

Conclusion: JSON Web Tokens (JWTs) are lightweight and can easily be used across platforms and languages. They are a clever way to pass signed or encrypted information between applications. There are several JWT libraries available for signing and verifying the tokens. We have also been able to show that brute forcing of HS256 JWTs is certainly possible when they are used with short and weak secret keys. Unfortunately, this is a limitation of most shared-key approaches. All cryptographic constructions, including HS256, are insecure if used
with short keys, so ensure that implementations satisfy the standardized requirements. As a rule of thumb, make sure to pick a shared key as long as the length of the hash; for HS256 that would be a 256-bit key, or 32 bytes, minimum. Luckily, if you are an Auth0 customer, you have nothing to worry about, as we follow all the standards and best practices when generating secret keys.", "image" : "https://cdn.auth0.com/blog/jwtalgos/logo.png", "date" : "March 23, 2017" } , { "title" : "How to Manage JavaScript Fatigue", "description" : "Many developers are overwhelmed by the rapidly expanding ecosystem of modern JavaScript. Learn how to manage and mitigate JS fatigue.", "author_name" : "Kim Maida", "author_avatar" : "https://en.gravatar.com/userimage/20807150/4c9e5bd34750ec1dcedd71cb40b4a9ba.png", "author_url" : "https://twitter.com/KimMaida", "tags" : "javascript", "url" : "/how-to-manage-javascript-fatigue/", "keyword" : "TL;DR: most JavaScript developers have heard of or experienced JavaScript fatigue. JS fatigue is the overwhelming sense that we need to learn most of the hottest emerging technologies in order to do our jobs well. This is unattainable, and the stress we feel to achieve it is unjustified. So how do we manage and combat JavaScript fatigue?

What is JavaScript fatigue? Putting satire aside, JavaScript fatigue is on a lot of developers' tongues and blogs recently, and with valid reason. But what does it mean to have JS fatigue? It's often mentioned when someone hears about a new library, framework, dependency manager, build tool, etc. Let's do a quick breakdown of what JS fatigue means.

JS fatigue vs. analysis paralysis: JS fatigue is often linked with analysis paralysis, also called choice paralysis. JS analysis paralysis can occur because of the huge range of options when selecting a framework, tooling, testing suites, and more for a new application. Choosing the right framework or library can be challenging and occasionally even paralyzing, but having a wealth of tools at our disposal allows us to be more selective about what's best for the job at hand. In some cases, options help us to avoid fatigue by supplying an ideal solution for a specific project.

What it means to have JS fatigue: we get JS fatigue when the requirements, either actual or self-imposed, for learning something are so daunting that a developer becomes exhausted and overwhelmed. JS fatigue can refer to: the fear that we'll fall behind or become obsolete if we don't know and use the newest, hottest tools; the sense that we never become experts in anything because everything changes too quickly and the tools we're trying to learn are already being replaced; picking up a new framework and then becoming overwhelmed thinking we need to master everything in the toolchain in order to use it; the worry that we'll pick a tool that will get displaced, resulting in a lack of support and obsolete resources; frustration with a lack of user empathy when consulting documentation or resources while trying to learn a new framework or toolchain.

JavaScript's astonishing growth rate: in a nutshell, JS fatigue has become a phenomenon because the JS landscape is ever-changing. Various build tools, transpilers, and syntax additions are considered par for the course. There are even dozens of entire languages that compile to JS. The exponential and unslowed growth of JS has opened the doors for great tools, languages, and frameworks; however, this also promotes change so rapid it can make any developer's head spin. The Deep Roots of JavaScript Fatigue delves into the history of JS and its swift evolution over a short amount of time; it's a great read and highly recommended. Only a few years ago, most programmers considered front-end development to consist primarily of HTML, CSS, and UI-enhancing JavaScript such as jQuery. Since then, JS alone has proliferated into isomorphic JS, functional reactive programming in JS, frameworks, libraries, build tools, package managers, and much more. We used to refer to front-end holistically as all client-side development, but JS has evolved to support a specialization: JavaScript developers.

How to manage JS fatigue: at this time, JS proliferation is not showing signs of slowing. JS fatigue doesn't have a magic-bullet cure, but there are things we can do to manage and also mitigate it. The following tips are useful for overwhelmed new developers as well as experienced, fatigued JS engineers.

Pick your battles: the first thing to focus on is picking your battles. We get easily overwhelmed trying to follow every hot new thing that emerges. It's good to be aware of the sea of new technologies, but not to drown in it. If something starts to come up a lot, read a little about it. You'll want to know just enough to answer the following: 1. What is its primary purpose? 2. Is it popular enough to have a stable, growing community and easily accessible support? Who is behind it? Who is using it? 3. Does it solve a problem I frequently run into with my current tools? If #1 isn't practical for your use case and the answers to #2 and #3 are not both yes, don't expend precious time and effort learning this if you're already fatigued. It can be best to wait and see, or to take a pass on tools that don't serve your goals. Make peace with your focus, and remember that no JS developer is an expert in every new tool that lands. In fact, it can make us better developers to know when to be okay with not learning some new tool. You may have heard the expression 'jack of all trades, master of none,' which implies superficial knowledge in many things but expertise in none of them. Remember that you're not obligated to learn everything, and you can excel at your craft without jumping on every bandwagon that rolls up to the curb. On the other hand, if a tool has gained critical mass and will help you solve a problem you're having, it's worth further exploration. Don't feel like you have to commit to learning it right away, but it might help to find out a little more and keep an eye on it.

Make something interesting / useful: for many developers, there are two primary ways we learn something new in a short amount of time: we need to learn it in order to complete a project with predefined requirements and a deadline, or we build something on our own that we're interested in. Learning anything can be arduous and tedious if we don't have a clear view of the end result or real-world practicality. By the same token, learning is much more gratifying when we're building something interesting or useful. One good way to learn new tools and frameworks is to make the same thing, something useful that you like, using different tools. This shouldn't be the ubiquitous and tiresome to-do app; it should be something that covers many common features of modern applications, such as: routing; global header and footer; CSS framework integration; responsive UI and styles; global application data; external APIs; services and utilities. This has several advantages for learning. Familiarity: knowing what you're trying to achieve and not making it up as you go along makes development more straightforward. Comparison: rebuilding something reveals similarities and differences as well as highlights strengths and weaknesses between frameworks. Iteration: you may find that each time you go through this exercise, you see things you can refine and improve. It's important to maintain a high level of interest and/or usefulness with your learning app. Creating a robust starter project can help you quickly learn the ins and outs of setup and common features while providing a practical beginning point for future apps you build.

Be aware of common concepts: even as JS grows and changes, there are always concepts shared amongst many new frameworks and libraries. It's useful to keep an eye out for these tools and topics. For example, ES6 and TypeScript are becoming more heavily used, as are webpack, functional reactive programming, and web components. Knowing about common dependencies makes different frameworks feel more similar. When you take the plunge with a new framework and toolchain, you'll learn some of the common topics, and you'll be pleased to find that other modern frameworks leverage many of the same tools and concepts and are now much easier to pick up.

Learn iteratively: many developers are fatigued by the fact that new frameworks have so many complex dependencies that it takes weeks to set up a new app. Don't be afraid to use tools if they help. If a CLI is available or the community offers starter projects, take advantage of them. Getting off the ground quickly allows you to focus on learning the core features and avoid getting discouraged by difficult setup. It's ideal to know how and why something works so it's less magical, but that doesn't mean you need to frontload that knowledge before getting started. Don't worry if you find yourself picking it up along the way. When you hit a roadblock, work through it, and remember to take breaks if you get frustrated. 'Learn as you go' is a legitimate, effective method for absorbing lots of new information over time. Once you've done the basics, the how and why reveal themselves, either in a-ha moments or gradually with repeated use.

Aside: use Auth0 for authentication in JS apps. Taking advantage of knowledge, tools, and solutions from others is extremely valuable when combating JS fatigue. Authentication can be one of the most complex, time-consuming features to build for any application. Developers who are already learning a new toolchain, library, or framework can become even more overwhelmed building secure, complicated features like authentication. If you have a JS app that needs authentication, Auth0 can bear the load for any framework. Auth0's single page application quickstart guides and Auth0 SDK for Web provide in-depth documentation for robust identity and user management in JS apps. Auth0 makes authentication straightforward, greatly reducing fatigue and cognitive burden for busy developers. Don't be hesitant to utilize the proper tools and services that will help you do your job and do it well. We're much less fatigued by new things when we have help completing difficult tasks. If authentication
is one of your primary needs, you can learn more in the Auth0 docs or sign up for a free account here.

Conclusion: we'll finish with an analogy. If JS is the world around you, there are a few ways to view and take it in. If you look at the JS world through a telescope, you can see one thing very clearly, but you're essentially blind to everything else around you. It's important to be comfortable with your focus, but not to the point that you shut out awareness of any other possibilities. If you view the world as a panorama through a wide-angle lens, you get a vast, comprehensive picture, but it's hard to know where to look; everything in the scene competes for your attention, and you can get easily distracted when trying to focus if something else catches your eye. This can be exhausting. Now consider a normal pair of glasses: you can see everything more clearly, but still focus your attention on one, or a few, things without losing sight of what's in your periphery. When viewing the modern JavaScript landscape, glasses are a good approach. Don't feel like you have to take in all your surroundings at once, but don't blind yourself to the larger world either. Focus your time and effort on what's in front of you while surveying occasionally for potential improvements. Hopefully you'll find yourself feeling more refreshed and enthusiastic about the great things JavaScript has in store for the future.", "image" : "https://cdn.auth0.com/blog/js-fatigue/JSLogo.png", "date" : "March 22, 2017" } , { "title" : "Why Identity Matters for Innovation Labs", "description" : "Learn about identity innovations, and why identity matters for innovation labs.", "author_name" : "Kim Maida", "author_avatar" : "https://en.gravatar.com/userimage/20807150/4c9e5bd34750ec1dcedd71cb40b4a9ba.png", "author_url" : "https://twitter.com/KimMaida", "tags" : "innovation-labs", "url" : "/why-identity-matters-for-innovation-labs/", "keyword" : "TL;DR: innovation drives technology, and innovation labs are critical for propelling the technology industry forward. Find out about innovative accomplishments with identity, and why identity matters for innovation labs.

What are innovation labs? Innovation labs are initiatives designed to promote and grow innovative thinking, services, and products. There are many ways that businesses approach the promotion of innovation, including forming strategic units or simply providing avenues for workers to network and collaborate. Some examples of corporate innovation labs include Research at Google, Microsoft Research Lab, Oracle Labs, the iLab at Harvard, and more. Innovation labs may have several aims, including but not limited to: ideation, cultivating ideas and collaboration through hackathons, internal proposals, and demos; attracting talent, since companies that stay on the cusp of cutting-edge technology and foster a culture of innovation are more attractive to skilled, prospective employees; and long-term revenue, because new products and other offerings can have substantial long-term returns, and in addition, bringing more ideation and production in-house reduces long-term costs.

Identity innovation: innovations in identity technology have led us to more secure and more dynamic tools and practices. Let's examine a few examples. Frictionless authentication with passwordless: the evolution of passwordless authentication has allowed users to leave memorizing complicated passwords behind while enjoying improved security. Companies like Slack and Medium utilize passwordless authentication to increase engagement and enhance security. Fingerprint authentication in mobile apps is also a form of passwordless login that greatly improves the user experience. In fact, passwordless biometric security is steadily gaining traction, with identity innovations such as the biometric wallet from Hammacher Schlemmer and biometric credit card technology. Personalization with identity data: data collected from user identity is valuable for increasing ROI by providing a personalized experience to users. Retailers make use of consumer data by providing product suggestions, personalizing ads, and sending personalized emails. For example, Qubit leverages analytics and visitor behavior to tailor e-commerce experiences to each user. Utilizing identity information like the user's location along with data like purchase history, preferences, checkout speed, and more enables highly focused personalization and targeting. About 60% of consumers agree that retailers who personalize their shopping experience make it easier to find the products that are most interesting. Using identity data to improve security: innovations with identity data have led to greatly enhanced security. Anomaly detection is the identification of unusual data points in a set. If a user normally logs into an application from New York City, USA, a login with the same credentials from Brisbane, Australia, is an outlier and could potentially indicate malicious activity. Identity data can be used to detect a multitude of different anomalies, including location, time, repeated failed login attempts, authentication from new devices, and much more. When an identity data anomaly has been detected, steps can be taken to mitigate risk: the user can be notified of potentially malicious activity and asked to verify their identity, their account can be frozen in the case of a brute force attack, or their credentials can be reset, such as in the case of breached passwords.

Why identity matters for innovation labs: swift development, flexibility, and focus on ideation and growth are vital for innovation labs. When producing an innovative product or application, it's important for innovators to be able to hone their craft and not have to expend hundreds of hours worrying about managing identity and securing authentication. Auth0 is an identity-as-a-service (IDaaS) solution that works with any application built on any framework. Auth0 also supports a multitude of identity providers, including popular social platforms like Twitter and Facebook as well as enterprise providers such as Active Directory. Integration is swift and easy, and can be implemented in just a few minutes. 'I'm a big proponent of letting experts do what they do best. I didn't want to rely on building [an identity management solution] ourselves.' (David Bernick, Harvard Medical School.) Innovation labs can reap numerous benefits from utilizing Auth0 as an identity management solution, including the following. Reduction in engineering time and costs: Auth0 enables easy integration with any application on any framework, be it Node.js, PHP, .NET, Swift, Java, Angular, etc. Extensive documentation and SDKs are available to jumpstart development and reduce the engineering time and effort needed to implement a robust authentication solution. Auth0 integrates with dozens of social and enterprise providers at the flip of a switch, and also supports single sign-on with minimal coding. Flexibility and extensibility: Auth0's login functionality and fine-grained authorization permissions are easily customized with rules. Numerous extensions are also available for additional features such as logging and GitHub deployments. Metadata can be added to users manually or via rules. Even the Auth0 login widget, called Lock, can either be used out of the box or replaced with a fully customized user interface. Increased security: Auth0 supplies multifactor authentication and passwordless at the click of a button. The solution also provides highly customizable anomaly detection in addition to breached password and brute force protection. Auth0 is peer reviewed by international security experts and complies with standards such as SAML, OAuth, and WS-Federation, and certifications like OpenID Connect, SOC 2, and HIPAA. Innovate without worrying about identity: rapid-fire development, excellent flexibility, and reduced time to market are vital for innovation to blossom in the technology industry. Auth0 provides services that enable these fundamental tenets of tech innovation. With Auth0, innovation labs can focus on producing groundbreaking offerings without worrying about authentication security and identity management.", "image" : "https://cdn.auth0.com/blog/innovation-labs/innovation-lab-logo.png", "date" : "March 21, 2017" } , { "title" : "Anomaly
Detection: Safer Login with ThisData and Auth0", "description" : "Learn how to detect authentication anomalies with ThisData to improve login security.", "author_name" : "Nick Malcolm", "author_avatar" : "https://cdn.auth0.com/blog/thisdata/nickmalcolm.jpeg", "author_url" : "https://twitter.com/nickmalcolm", "tags" : "ThisData", "url" : "/anomaly-detection-safer-login-with-thisdata-and-auth0/", "keyword" : "Guest post by Nick Malcolm, CTO at ThisData. Anomaly detection is the process of identifying events which are out of place or unusual. Detecting anomalies in web applications can reveal signs of malicious activity or hackers, and responding to those anomalies automatically helps keep our users safe. A common example is the email you might have gotten from Google or Facebook when you log in from a new computer or location: you usually log in using a MacBook from your beachfront office in Fiji, and now you're logging in from Siberia using Linux. The “was this you?” email is a result of anomaly detection, and in this post we're going to supercharge your Auth0 login process with ThisData's login intelligence to achieve the same results.

ThisData gives you real-time detection of account takeover for web and mobile apps. It identifies users based on context and notifies you or your users immediately if an account has been breached, just like Google. In a previous guest post on the ThisData blog we learned how to use ThisData's anomaly detection rules to stop attackers from logging in to your users' accounts via Auth0. In this post, you'll learn how to implement account takeover detection via ThisData in your Auth0 app in just six simple steps. Let's get started.

1. Sign up for a ThisData account. Browse to thisdata.com and create a free 30-day trial account, as shown below.

2. Get your API key. In the first step of ThisData's quickstart is your API key. Please make note of it, as you will be needing it later.

3. Set up an Auth0 app. In the Auth0 dashboard, create a new client and choose Single Page Web Application, as shown in the following screenshot. Once you've created a client, head over to the Settings section of the dashboard and take note of your Domain, Client ID, and Client Secret as shown below. Clone this sample app from GitHub, open up auth0-variables.js, and add your Auth0 credentials like so:

var AUTH0_CLIENT_ID = 'xxxxxxxxxxx';
var AUTH0_DOMAIN = 'xxxxxxx.auth0.com';
var AUTH0_CALLBACK_URL = location.href;

4. Integrate ThisData. In the Auth0 dashboard, click on the Rules section in the main navigation, then click on the “Create Rule” button located at the top right of the page. A list of available rule templates will be presented to you, as shown in the diagram below. Choose the “Account Takeover Detection via ThisData” rule. This rule is designed to detect phished or compromised user accounts. Even if the primary user authentication is approved, it will deny access to a user if the login appears to be highly suspicious. It relies on ThisData anomaly detection algorithms which take into account factors like devices, time of the day, Tor usage, location & velocity, risky IP addresses, machine learning, and much more. ThisData attaches a risk score to every login event; a higher risk score indicates a more significant anomaly was detected. If the risk is high, the user can still log in, but we can also send a notification to their email address to verify it was really them. The “Account Takeover Prevention via ThisData” rule will block a login attempt if the risk is too high. After clicking on the rule, the rule editor will show up. Here, you can see the code that integrates ThisData with your login process. It's nice and simple: it pushes some metadata to ThisData's API when your user logs in. Get your ThisData API key and paste it in the Settings section; the rule will have access to it as an environment variable.

5. Turn on notifications. Turning on notifications is optional, but awesome. Notifications help your users take action when their account is attacked. Head over to your ThisData account and browse to API Settings. In the sidebar, click User Notifications. The final checkbox to tick is “Send Email”.
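To make the risk-score handling above concrete, here is a small, illustrative sketch in plain JavaScript of the allow/notify/block decision such a rule applies. The function name and thresholds are invented for this example; ThisData's real scoring weighs many signals (devices, time of day, Tor usage, location and velocity, risky IP addresses, machine learning) and is not reproduced here.

```javascript
// Toy decision helper: maps a login risk score in [0, 1] to an action.
// The 0.9 and 0.5 thresholds are made up for illustration only.
function decideLoginAction(riskScore) {
  if (riskScore >= 0.9) {
    return 'block';  // highly suspicious: deny the login outright
  }
  if (riskScore >= 0.5) {
    return 'notify'; // allow the login, but email the user to verify
  }
  return 'allow';    // nothing unusual detected
}

console.log(decideLoginAction(0.95)); // "block"
console.log(decideLoginAction(0.6));  // "notify"
console.log(decideLoginAction(0.1));  // "allow"
```

In the real integration the score comes back from ThisData's API at login time; the point here is only the three-way outcome the rule template implements.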
Ticking “Send Email” turns on end-user notifications. You can also upload your company logo here, or enable Slack notifications by clicking Integrations in the sidebar.

6. Run & test your app. Now let's run our sample Auth0 app and see how it all works. On a Mac you can run the app by typing python -m SimpleHTTPServer 8000 in the command line. Open up your browser and run the app like so. Log into your application, and then head over to the ThisData website. You will see the recorded login event with an associated risk score, as follows. If there is irregular activity, like a sudden change in device or location, accessing the website at an unusual time, using Tor, or other anomalies, then your user will receive an email like this, and your Slack channel might look like this. In the example above, the user was immediately notified of suspicious access to their account. They then responded by clicking “No it wasn't [me]” in the email. The initial alert and the response are also visible to your ops team in Slack. You can configure ThisData to take automated action too: learn how by reading “Create a security workflow with alert webhooks” in ThisData's documentation.

Conclusion: it is super simple to integrate ThisData into your authentication process when building an app that uses Auth0. ThisData allows you to detect login anomalies to better protect your users and your app from cyber-criminals. Cyber-attacks are on the rise, so taking these simple security precautions helps ensure that your users and apps are safe. Make your applications more secure today with ThisData and Auth0.", "image" : "https://cdn.auth0.com/blog/thisdata/ThisData-logo.png", "date" : "March 20, 2017" } , { "title" : "Analyzing Identity in Movies", "description" : "As technology becomes more advanced, movies are predictors of how our identity will be utilized.", "author_name" : "Martin Gontovnikas", "author_avatar" : "https://www.gravatar.com/avatar/df6c864847fba9687d962cb80b482764??s=60", "author_url" : "http://twitter.com/mgonto", "tags" : "identity", "url" : 
"/analyzing-identity-in-movies/", "keyword" : "TL;DR: as technology becomes more advanced, movies are predictors of how our identity will be utilized.

Our relationship with technology is constantly evolving. Whether it's an app to help manage our daily activities or the latest tech device, technology is changing on a day-to-day basis to adapt to our needs. These advances are making our lives easier and more productive. To that end, there's no shortage of movies that explore the role technology will play in the future. But when we take a minute to separate the fact from fiction, what are we left with? What do these movies tell us about what's to come? Even more intriguing, what do these movies tell us about how our identities will mold the future? How will our identities become even more important than they are today? To answer these questions, we examined three movies that explore identity in the future and examined how their assumptions might be closer to reality than we think. Let's take a closer look.

Minority Report: identity and data collection. The 2002 film Minority Report paints an intriguing picture of America in the 2050s. A system using a combination of biometric surveillance and psychics called “precogs” has been developed to apprehend potential murderers. As a result of capturing criminals before they act, the crime rate plummets to zero. The film focuses on Captain John Anderton, a PreCrime police officer, as he attempts to outrun the law after the precogs predict he'll murder someone. His attempts to elude the authorities are hampered by the fact that public spaces are brimming with retinal scanners. His only option to evade capture and hide his identity is to have his eyes surgically replaced by an underground doctor. What makes the use of identity in this film interesting is the use of biometric data collection to generate holographic ads for individual retail customers. When Anderton enters a Gap store, he's greeted as “Mr. Nakamura”, who we can assume was the former owner of the transplanted eyes.

This extensive use of one's identity in the future will give retailers the opportunity to provide highly sophisticated and personalized advertising experiences. Only ads that cater to you as an individual, rather than to your demographic, would be presented. In the case of Minority Report, predictive analysis based on historical purchases is used to enhance marketing and advertising strategies by singling out customers. How do we know identity will be used this way in the future? It's already happening. For Gmail users, inboxes have become a hotspot for ads. Based on the sites users have visited and the types of emails received, Inbox ads become increasingly more targeted. Gone are the days of zero targeting, when everyone was served the same ads and customers had no choice but to listen because there were only a handful of TV stations, newspapers, and magazines to choose from. If Minority Report is any indication of what we can expect, ads of the future will be more important for the simple fact that they'll be even more personalized and targeted to individuals. Marketers can focus on specific categories and not just general characteristics of a much broader target audience.

Her: identity and personalized experiences. In the 2013 film Her, Theodore Twombly is a lonely, recent divorcee. In an attempt to soothe his growing depression, he purchases an artificial intelligence personal assistant. Because of its ability to adapt based on interactions, it gives itself the name “Samantha”, and he falls in love with it. Or rather, he falls in love with the illusion of the type of relationship he desperately needs. The film explores how this growing relationship provides him with the support and understanding he's been unable to achieve with other people. While the film takes place in the future, we're shown how technology can blend seamlessly into our lives and adapt to our preferences.

Although the concept of developing an emotional relationship with technology seems far-fetched, we already see hints of a deepening bond as technology becomes increasingly more integrated with our day-to-day lives: Google Home, Amazon's Alexa, and Apple's Siri. As the Internet of Things takes off, we're going to see more of this type of integration of technology into our everyday lives, and we'll probably also see more parents using it to troll their kids. This technology is different from anything that's preceded it because it's designed to understand our identity and utilize that understanding to reduce our need to go out and find information ourselves. Google has taken this concept and created products that enable us to seek and find the information we need faster. This enhanced user experience means that we're getting information before we even search for it. Take Google Now, for instance: it provides users with recommendations based on usage. Technology is moving in this direction as it becomes less of a focal point in our lives and begins to recede into the background. With Google Home, users can simply verbalize what they're looking for, thereby reducing any friction that may prevent them from satisfying their immediate needs. Google Home optimizes user experience by relying on their identity and specific preferences. Similar to the movie Her, users can interact with Google Home to give the illusion of a two-way conversation. With this shift, identity becomes more important as our technology learns more about us than anyone else, thereby reducing our need to go out and find information for ourselves.

Gattaca: identity and biometric identification. In the movie Gattaca, genetic engineering and state-sanctioned eugenics are used to determine a person's propensity to excel or fail. This approach divides society into two groups: essentially the “haves” (valids) and the “have-nots” (in-valids). Vincent Freeman is deemed an in-valid because he's conceived naturally, without the use of genetic selection. Despite his genetic “inferiority”, he's determined to become an astronaut. In order to mask his true genetics, he uses the identity (blood, skin, hair, eyelashes, urine, and saliva) of a paralyzed man, a valid, by falsifying a finger-prick blood test to gain access to opportunities that would typically be out of reach. Despite the obvious physical differences between Vincent and Jerome, we're shown that genetics take precedence. In other words, identity is authenticated by biometrics.

Though Gattaca was based on fiction, there's now an increased reliance on the use of biometric identification to determine what access people should have. Most cell phones come equipped with face recognition software or fingerprint encryption to unlock them. A sufficiently motivated hacker is going to be able to break biometrics: since it's not based on any inherent aspect of your body but rather on a digitized version of your identity, it can be misused. Your fingerprint is really being stored as a digital image. Every time you put your finger on the sensor, it's producing a different image, because you always put your finger on a different part of the sensor; the digital image needs to be flexible enough to account for that fact. Fortunately, we keep our customers up to date on the latest changes in cybersecurity, thereby making sure their identity is always safe. As Gattaca demonstrates, as we move into a world that relies more on biometric identity, we have to understand that we can't be so stringent with what determines personal identity. If not managed properly, authentication procedures have the potential to create biased class systems or categories.

“The future is already here.” As we continue to see advancements in technology, we should prepare ourselves for even more impressive leaps forward. With more access to our identities and with the development of progressive systems, the seeds of advancement have been sown. As our daily interaction with technology grows, it's important to remember that the need for identity metrics will also increase. Allowing companies to learn more about us enables more personalized experiences. With less effort on our part, service providers and retailers have clear insight into our needs and
wants. This, in turn, allows for more technology to integrate into our lives. Technology then acts more as a companion, rather than a simple tool to do our bidding. Algorithms will give our technology more insight into who we truly are at our core. To quote science fiction writer William Gibson: “the future is already here”.", "image" : "https://cdn.auth0.com/blog/identity-in-movies/logo.png", "date" : "March 17, 2017" } , { "title" : "Web Components: How To Craft Your Own Custom Components", "description" : "Learn how to make web components and leverage them in your applications today.", "author_name" : "Prosper Otemuyiwa", "author_avatar" : "https://en.gravatar.com/avatar/1097492785caf9ffeebffeb624202d8f?s=200", "author_url" : "https://twitter.com/unicodeveloper", "tags" : "polymer", "url" : "/web-components-how-to-craft-your-own-custom-components/", "keyword" : "TL;DR: the introduction of web components has given developers super powers. With web components, web designers and developers are no longer limited to the existing HTML tags that browser vendors provide. Developers now have the ability to create new HTML tags, enhance existing HTML tags, or extend components that developers around the world have created. In this article, I'll show you how to use and create web components for your apps.

Web components allow for reusability and the ability to associate JS behaviour with your markup. Developers can search for existing components created by other developers on the web components registry. In the absence of suitable existing elements, developers can create their own and make them available for others by publishing to the registry.

What are web components? Web components are a set of web platform APIs that allow you to create new, reusable, encapsulated HTML tags to use in web pages and web apps. They are reusable widgets that are built on the web component standards. Web components work across modern browsers and can be used with any JavaScript library or framework that utilizes HTML. There are some sets
of rules and specifications that you need to follow to develop web components. These specifications are classified into four categories: custom elements, shadow DOM, HTML imports, and HTML template. We'll talk about these specifications in the latter part of this post, but let's quickly learn how to use web components.

How to use web components: the first step is to browse the element registry. Check for the components that you are interested in, then go through the README to know how to import and use them in your web applications. The web component registry has two main sections: elements in the registry, and collections, which are sets of elements. An example is the awesome-chart-elements collection that contains eight awesome elements for working with charts in a web app. An example web component you can install is juicy-ace-editor. You can install it by following this process. Make sure you have Bower installed, else run:

npm install -g bower

Now install the juicy-ace-editor component like so:

bower install juicy-ace-editor --save

Create an index.html file and import the juicy-ace-editor component like this:

<link rel="import" href="bower_components/juicy-ace-editor/juicy-ace-editor.html">

and place the component on the page like this:

<juicy-ace-editor theme="ace/theme/monokai" mode="ace/mode/javascript"></juicy-ace-editor>

This is an example of the component in the index.html file:

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Document</title>
  <script src="bower_components/webcomponentsjs/webcomponents.min.js"></script>
  <link rel="import" href="bower_components/juicy-ace-editor/juicy-ace-editor.html">
  <style type="text/css">
    #editor-container { position: absolute; top: 0px; left: 280px; bottom: 0; right: 0; background: white; }
  </style>
</head>
<body>
  <juicy-ace-editor id="editor-container" theme="ace/theme/monokai" mode="ace/mode/javascript">
var user = require('/controllers/user.server.controller'),
    notification = require('/controllers/notification');
module.exports = function(app) {
  app.get('/api', user.welcome);
  app.post('/api/users', user.createNewUser);
  app.delete('/api/user/:user_id', user.deleteOneUser);
  app.get('/api/notify', notification.notifyUsers);
};
  </juicy-ace-editor>
</body>
</html>

In the code above, we referenced a
script. The webcomponents.js file is the web components polyfill for browsers that don't support web components yet. When you check out your browser, this is how your page will look. Follow the documentation here to install and run it in your web browser. It is that simple. Now we have a code editor in our browser just by importing a web component. Whoop! Now, let's go through the web components specifications in order to know how to create a custom component.

How to create web components. Custom elements: this is a web component specification that defines how to craft and use new types of DOM elements. There are some ground rules on how to name and define your custom elements:

- The name of your custom element must contain a dash (-). For example, <file-reader> and <skype-login> are valid names for custom elements, while <skype_login> and <skypelogin> are not. This is necessary in order to allow the HTML parser to differentiate between a custom element and an inbuilt HTML element.
- A custom element can't be registered more than once; a DOMException error will be thrown if you do so.
- A custom element can't be self-closing. You can't write a custom element like this: <skype-login />. It should always be written like this: <skype-login></skype-login>.

A custom element can be created using the customElements.define() browser API method and a class that extends HTMLElement in JavaScript, like so:

class FileBag extends HTMLElement {
  // Define behaviour here
}
window.customElements.define('file-bag', FileBag);

Another option is to use an anonymous class, like so:

window.customElements.define('file-bag', class extends HTMLElement {
  // Define behaviour here
});

With this already defined, you can now use the custom element in a web page like so:

<file-bag></file-bag>

You can define properties on a custom element. For instance, let's add an attribute called open to our <file-bag> element. This can be achieved like so:

class FileBag extends HTMLElement {
  // Set the "open" property
  set open(option) {
    this.setAttribute("open", option);
  }
  // Get the "open" property
  get open() {
    return this.hasAttribute("open");
  }
}

Here, this refers to the DOM element itself, so in this example this refers to <file-bag>. Once you have done this, you can use the element in your browser like this:

<file-bag open="true"></file-bag>

Note that you can also define a constructor.
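The naming rules above can be captured in a small helper. This validator is not part of any spec or library, just a simplified sketch (the actual custom elements specification permits a few more characters than this regular expression does):

```javascript
// Simplified check for the custom-element naming rule described above:
// lowercase, starts with a letter, and must contain at least one hyphen
// so the HTML parser can tell it apart from built-in elements.
function isValidCustomElementName(name) {
  return /^[a-z][a-z0-9]*-[a-z0-9-]*$/.test(name);
}

console.log(isValidCustomElementName('file-reader')); // true
console.log(isValidCustomElementName('skype-login')); // true
console.log(isValidCustomElementName('skypelogin'));  // false (no dash)
console.log(isValidCustomElementName('skype_login')); // false (underscore)
```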
If you define a constructor in the class, you have to call the super() method before adding any other piece of code. There are lifecycle hooks that custom elements can define during their existence. These hooks are:

- constructor(): here, you can attach event listeners and initialize state.
- connectedCallback(): called whenever the custom element is inserted into the DOM.
- disconnectedCallback(): called whenever the custom element is removed from the DOM.
- attributeChangedCallback(attrName, oldVal, newVal): called whenever an attribute is added, removed, or updated. Only attributes listed in the observedAttributes property are affected.
- adoptedCallback(): called whenever the custom element has been moved into a new document.

You can reference the custom elements specification for a lot more information.

Shadow DOM: this is a powerful API to combine with custom elements. It provides encapsulation by hiding DOM subtrees under shadow roots. You can use shadow DOM in a custom element like so:

class FileBag extends HTMLElement {
  constructor() {
    super();
    var shadowRoot = this.attachShadow({mode: 'open'});
    shadowRoot.innerHTML = `<strong>Shadow DOM super powers for the win!</strong>`;
  }
}

So when you call <file-bag><p>This is a file bag</p></file-bag> in the browser, it will be rendered like so. The main idea behind shadow DOM is to mask all of the markup behind a custom element in the shadows. If you inspect the element in the browser, you won't see any of the markup apart from the attributes of the element; they are hidden under shadow roots. Browser vendors have been using shadow DOM for years to natively implement elements such as <input>, <audio>, <video>, and many others. Another benefit is that all the styling and scripts inside the custom element won't accidentally leak out and affect anything else on the page. You can reference the shadow DOM specification for a lot more information.

HTML imports: HTML imports are a way to include and reuse HTML documents in other HTML documents. The import keyword is assigned to the rel attribute of the link element, like so:

<link rel="import" href="/imports/file-reader">

You can reference the HTML imports specification for a lot more information.

HTML template: this is a web component specification that defines how to declare pieces of markup at page load. The
<template> tag is placed within the web component. You can write HTML and CSS code within this tag to define how you want the component to be presented in the browser. You can reference the HTML template specification for very detailed information on templating.

Build a Vimeo embed web component: we'll build a web component that will allow users to embed Vimeo videos into their apps easily. Let's get started. Create a new HTML file, vimeo-embed.html, and define the HTML template markup like so:

<!-- Defines element markup -->
<template>
  <style>
    .vimeo { background-color: #000; margin-bottom: 30px; position: relative; padding-top: 56.25%; overflow: hidden; cursor: pointer; }
    .vimeo img { width: 100%; top: -16.82%; left: 0; opacity: 0.7; }
    .vimeo .play-button { width: 90px; height: 60px; background-color: #333; box-shadow: 0 0 30px rgba(0, 0, 0, 0.6); z-index: 1; border-radius: 6px; }
    .vimeo .play-button:before { content: ""; border-style: solid; border-width: 15px 0 15px 26px; border-color: transparent transparent transparent #fff; }
    .vimeo img, .vimeo .play-button { cursor: pointer; }
    .vimeo iframe, .vimeo .play-button, .vimeo .play-button:before { position: absolute; }
    .vimeo .play-button, .vimeo .play-button:before { top: 50%; left: 50%; transform: translate3d(-50%, -50%, 0); }
    .vimeo iframe { height: 100%; width: 100%; }
  </style>
  <div class="vimeo"><div class="play-button"></div></div>
</template>

We have also added CSS to the template tag to define the styling of the vimeo-embed component. The next step is to actually create the custom element. Add a <script> tag just after the </template> tag and create it like so:

<script>
(function(document, undefined) {
  // Refers to the "importer", which is index.html
  var thatDoc = document;
  // Refers to the "importee", which is vimeo-embed.html
  var thisDoc = (thatDoc._currentScript || thatDoc.currentScript).ownerDocument;
  // Gets content from <template>
  var template = thisDoc.querySelector('template').content;

  // Shim shadow DOM styles if needed
  if (window.ShadowDOMPolyfill) {
    WebComponents.ShadowCSS.shimStyling(template, 'vimeo');
  }

  class VimeoEmbed extends HTMLElement {
    constructor() {
      super();
      var shadowRoot = this.attachShadow({mode: 'open'});
      // Adds a template clone into shadow root
      var clone = thatDoc.importNode(template, true);
      shadowRoot.appendChild(clone);

      var embedID = this.getAttribute("embed");
      var video = shadowRoot.querySelector('.vimeo');
      this.createAndPlay(embedID, video);
    }

    createAndPlay(embedID, videoElem) {
      videoElem.addEventListener("click", function() {
        var iframe = document.createElement("iframe");
        iframe.setAttribute("frameborder", "0");
        iframe.setAttribute("allowfullscreen", "");
        iframe.setAttribute("webkitallowfullscreen", "");
        iframe.setAttribute("mozallowfullscreen", "");
        iframe.setAttribute("src", "https://player.vimeo.com/video/" + embedID + "?autoplay=1");
        iframe.setAttribute("width", "640");
        iframe.setAttribute("height", "360");
        videoElem.innerHTML = "";
        videoElem.appendChild(iframe);
      });
    }
  }
  window.customElements.define('vimeo-embed', VimeoEmbed);
})(document);
</script>

We have the constructor and the createAndPlay method. As I mentioned earlier, the constructor initializes state in the custom element, so we implemented the shadow DOM and called the createAndPlay method in the constructor. In the createAndPlay method, we simply added a click event listener and used JavaScript to create an iframe and set the required attributes. Finally, we called window.customElements.define('vimeo-embed', VimeoEmbed) to attach the VimeoEmbed class to the <vimeo-embed> tag.

HTML import: create an index.html file and go ahead and import the vimeo-embed.html file in it, like so:

<html>
<head>
  <title>Vimeo Embed</title>
  <script src="bower_components/webcomponentsjs/webcomponents.min.js"></script>
  <link rel="import" href="vimeo-embed.html">
  <style>
    .wrapper { max-width: 680px; margin: 60px auto 100px; }
  </style>
</head>
<body>
  <div class="wrapper">
    <vimeo-embed embed="203909195"></vimeo-embed>
  </div>
</body>
</html>

Oh, you can see the webcomponents.js polyfill referenced in the script tag. How did we get that? Install it via Bower like this:

bower install webcomponentsjs --save

Browser view: from your terminal, run a local server, e.g. http-server, to serve up the web page. Your web page should display the component. Once you click the play button, the video should autoplay. Inspect the page with Chrome DevTools: check out the <vimeo-embed> tag, and check out the shadow DOM beneath it.

Now that we have a fully functional Vimeo embed web component, let's package it and submit it to the registry. Submit to the web component registry: there is a list of requirements to adhere to before submitting your component to the registry. Follow the instructions below: add an open source license, add a README and include a demo, tag a release, and go ahead and publish. Now your component should be visible in the registry. Yaay!

Browser support for web components: Google Chrome is leading the pack of browsers with stable support for web components in their web and mobile browsers. Take a look at the browser support matrix below.
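One practical use of that support matrix is deciding whether to load the polyfill at all. The helper below is a hypothetical sketch (not part of webcomponents.js) that probes for the four specifications covered in this post; the window object is passed in as a parameter so the logic is easy to test:

```javascript
// Hypothetical feature probe: returns true when the browser is missing
// native support for any of: custom elements, shadow DOM, HTML imports,
// or the <template> element.
function needsWebComponentsPolyfill(win) {
  var doc = win.document;
  return !('customElements' in win) ||
         !('attachShadow' in win.Element.prototype) ||
         !('import' in doc.createElement('link')) ||
         !('content' in doc.createElement('template'));
}
```

If it returns true, inject a script tag for the webcomponents.min.js bundle before defining any custom elements.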
(Source: webcomponents.js.) To be safe, it is recommended to use webcomponents.js to provide support for many browsers; we used webcomponents.js during the course of building our own component. webcomponents.js is a suite of polyfills supporting the web components specs. These polyfills are intended to work in the latest versions of browsers. Web components capabilities are disabled by default in Firefox. To enable them, go to the about:config page and dismiss any warning that appears, then search for the preference called dom.webcomponents.enabled and set it to true.

Tools for building web components: there are libraries available that make it easier to build web components. Some of these libraries are Bosonic, Polymer, SkateJS, and X-Tag. All the libraries highlighted here offer tools to cut down boilerplate code and make creating new components easier. Polymer and Bosonic also offer a library of ready-made web components, but Polymer remains the most widely used amongst developers. Check out this awesome tutorial on building apps with Polymer and web components.

Aside: easy authentication with Auth0. You can use Auth0 Lock for authentication in your web apps. With Lock, showing a login screen is as simple as including the auth0-lock library and then calling it in your app, like so:

// Initiating our Auth0Lock
var lock = new Auth0Lock('YOUR_CLIENT_ID', 'YOUR_AUTH0_DOMAIN');

// Listening for the authenticated event
lock.on("authenticated", function(authResult) {
  // Use the token in authResult to get the profile and save it to localStorage
  lock.getProfile(authResult.idToken, function(error, profile) {
    if (error) {
      // Handle error
      return;
    }
    localStorage.setItem('idToken', authResult.idToken);
    localStorage.setItem('profile', JSON.stringify(profile));
  });
});

// Implementing Lock
document.getElementById('btn-login').addEventListener('click', function() {
  lock.show();
});

You can also use the auth0-lock Polymer web component for login, like so:

<auth0-lock autologin="" domain="AUTH0_DOMAIN" clientid="AUTH0_CLIENTID" profile=""></auth0-lock>

var firebaseRequest = {
  api: "firebase", // This defaults to the first active addon, if any, or you can specify this
  scope: "openid profile" // Default: openid
};

document.querySelector('auth0-lock').addEventListener('logged-in', function(e) {
  console.log(e);
  // Try to get
  // delegated access to Firebase
  document.querySelector('auth0-lock').delegate(firebaseRequest, function(result) {
    console.log(result);
  });
});

Conclusion: web components have a lot more benefits than meets the eye. Web components allow for less code, modular code, and more reuse in our apps. In my opinion, the major selling point of web components is reusability and simplicity of use. The more high-quality components developers submit to the registry, the more good tools will be available to the community for building better and beautiful web apps in less time. Have you been using web components for a while? Do you think web components are the future of web app development? Are they just another hipster technology? I'd like to know your thoughts in the comment section.", "image" : "https://cdn.auth0.com/blog/webcomponents/webcomponentslogo.png", "date" : "March 16, 2017" } , { "title" : "5 Reasons Your Company Needs Identity and Access Management", "description" : "From revenue to employee happiness, identity management has more to offer than you might think.", "author_name" : "Martin Gontovnikas", "author_avatar" : "https://www.gravatar.com/avatar/df6c864847fba9687d962cb80b482764??s=60", "author_url" : "http://twitter.com/mgonto", "tags" : "IAM", "url" : "/5-reasons-your-company-needs-identity-and-access-management/", "keyword" : "Identity management seems like just a small piece of the puzzle that keeps your business running smoothly. If there's basic functionality for your system that gets people where they need to be, you should be all set, right? Unfortunately, it isn't that simple. Identity management is more than just being able to stick a username and password into a login box. It's very difficult to do right, and login will throw a wrench in the works if it's done wrong. But when you're committed to the best practices for login management, your business can benefit in ways you might not have realized, and that's where outsourcing your identity management comes in. There are many reasons why your company needs identity and access management, but these five are a
good place to start1attracting more usersif your business is b2b or b2cyoure always thinking about attracting new usersmaybe youstep up your game to craft some killer ads for social media and a/b test your ads with a platform like adespresso to get the best results from your ad budgetreconfigure your onboarding process to help people realize the value of your product from their first sessionrun extensive mobile analytics for your new app to figure out exactly what you need to tweak to make it betterbut eventually just going through traditional channels like ads and analytics isnt enoughone of the most effectivebut oft-forgotten ways to drive conversions is simply to change loginuser-friendly options like single sign-on and social login can make a big difference in how many people actually sign up — to the tune of a 20% increase in conversionsbut then theres the daunting prospect of configuring across platforms for sso and potential security concerns for social loginto create a great login experience that converts usersyou need a good identity and access management systemiamwith auth0you can implement frictionless sign-up options like social login almost instantlyusing an iam system to implement a secureuser-friendly login is one of the most compelling changes you can make to convert a user2securing your dataunfortunatelywe cant even go a few months without hearing about breaches of login informationas more companies implement an online or app presence and more consumers sign up for those accountsthe stakes of a security compromise only grow with each passing yearthe thought of having a problem with login security should strike fear into every businesss heartbut it can be daunting to find the best way to implement a watertight login systemyour business cant budget in an entire24/7 security detailwellnoyou probably canthoweverwhen you outsource your identity managementyou can get all the strength and expertise of your own security detail without keeping an army of 
engineers in your officefrom encryption to password breach detectionyour login will be as secure as possible when you outsource your identity managementyou cant spend every second of every day making sure that your login is the most securebut that is precisely what an iam company doesthat wayyou can sleep at night knowing that you and your customers data is safe and sound3supercharging your marketingselling and upselling your product means having a clear picture of your customersinterestsbehaviorsand desires to make marketing materials that really hook them inthe problem is thatmore often than nota customers information is spread out over different platformsyou might have one platform for analyticsone for email messagingone place where you store login informationultimatelythe more you utilize customer informationthe more platforms youll havewhat you need is the ability to consolidate this information to create a powerfulinformation-rich profileyou can streamline the process of creating a profile by automating the transfer of information at loginthe easiest way to do this is with auth0 rulessnippets of code that trigger at login with auth0for exampleyou grab someones profile information from their social login when they sign upthenyou use a rule to automatically enter that information into your platform of choicesay your email messaging platformeasy as thatve nailed that next happy birthday discount message4keeping up with the trendswhether its the latest social login or the latest security protocols paramount that you keep up to date on the best login practicesve got to be able to transition to the latestgreatest versions of loginpasswordless logintheres a lot of new technologyand much of it could radically change the way that we log in to everyday accounts and devicesembracing the future of login means being prepared to use biometric data and increasingly integrated devices in a rapidly growing internet of thingsstaying on top of these shifting technologies is a 
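The Rules-based profile sync described above (grab someone's social profile at signup, then push it to a platform of choice) can be sketched roughly as follows. This is a minimal illustration of the Rule programming model, not Auth0's exact integration; the `syncedToEmailPlatform` flag and the forwarding step are assumptions.

```javascript
// Minimal sketch of an Auth0 Rule: it runs at login, receives the user
// profile (including fields from the social provider), and can forward
// that data elsewhere before completing the login.
function rule(user, context, callback) {
  user.app_metadata = user.app_metadata || {};

  // Only act once, on the first login, when the social profile arrives.
  if (user.app_metadata.syncedToEmailPlatform) {
    return callback(null, user, context);
  }

  // In a real Rule you would POST this object to your email/marketing
  // platform's API here (hypothetical step, not shown).
  var profile = {
    email: user.email,
    name: user.name,
    birthday: user.birthdate
  };

  user.app_metadata.syncedToEmailPlatform = true;
  callback(null, user, context);
}
```

Because the Rule runs on every login, guarding with a flag in `app_metadata` keeps the sync from firing repeatedly.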
key to businessespecially startupsuccess — unfortunatelys also not usually your first priority when the bread and butter of your business isnt loginthis is why outsourcing your identity management can continue to return value to you in the long runan iam solution will be able to keep up with changes in social loginadd relevant and novel features as they are adoptedand keep security on the cutting edgethat means you can continuously update your login with almost no work on your endll always be ahead of the login curvegiving customers confidence in your security and the newestmost convenient logins5making life easier for your companyidentity management can make a huge difference in the lives of your employeeswhen you have disparate systems that you need to connect to conduct your businesslogin can become a hassleif youre a smaller businesstaking the time to sync up all of your systems to an easy login can be a difficult thing to do with limited time and resourcesre a larger companythe number of employees you have to sync and the permissions you have to manage can be overwhelmings without even getting into industry requirementsyou can streamline your internal systems with a single sign-onthis will allow your employees to get into all the systems they need with just one login and passwordand to stay logged in across platforms throughout the daythat streamlines the process of getting work donea robust iam system will also allow you to implement the correct permissions and add new accounts without having to think twiceohand any industry requirements that crop upusing an iam solution takes the headaches out of adopting new protocols for your login if ever they come your wayleave your login to the professionalsthe best way to take full advantage of all that an iam solution can offer you is to outsource your identity managementunless you decide to sink your resources into building a full-scale identity management teamyou simply wont be able to tap into the benefits that a 
robust iam solution can offer youidentity management is more than a simple loginit can offer a real value to your businessfor everything from employee quality of life to revenue generationand it has the potential to be an integral component of your success", "image" : "https://cdn.auth0.com/blog/ga/access-management-logo.png", "date" : "March 15, 2017" } , { "title" : "User Provisioning and Access Request with Auth0 and Webtask", "description" : "A deep look at how we automated our employee access request system using Auth0 as the directory and Webtask for serverless last mile integration with our systems", "author_name" : "Alex Stanciu", "author_avatar" : "https://2.gravatar.com/avatar/71fb37b19e60e1b27a78dc91630dbb29", "author_url" : "https://twitter.com/alecks", "tags" : "provisioning", "url" : "/automating-access-requests-and-provisioning/", "keyword" : "automating access requests &provisioninga deep look at how we automated our employee access request system“give me access to vpn”this is all you need to tell our slack bota request is created and your manager receives an approval requestsome backgroundgranting users access to various resources is a challenge that manyif not mostorganizations faceunfortunatelymany small/medium-sized companies find the cost of purchasing and operating monolithic provisioning systems prohibitiveuser access often becomes a highly manual and messy processhere at auth0 we started out with a google formemployees would fill it out checking off various resources/apps they wanted access tothe form saved into a google sheet where a script would take each entry and create an issue in a github repositoryour it team would then use the repos issues as a request tracking system and manually fulfill themthere are a few problems with this approachno approval mechanismno trackingand no easy way to show what an employee has access tovery hard to automatepretty bad ux… so we decided to build somethinggoalsfor our internal mvpwe wanted to at least 
address the above pointsbut also set ourselves up so this tool could growprovide an easy way to define resourcesthe things that people can requestfor exampleemail distribution listsbuilding badgesaccess to awsgithublaptopand vpnsas well as work orders like “restore a backup” or “reset mfa”customizablemulti-stepdynamic approval workflows tracking/reporting to help our soc2 auditsmechanism to setup automated fulfillment for things that can be automatedfor exvia apisautomatically add a user to our github organizationfrictionless uxwe are slack bot junkiesso we knew this was a critical integrationarchitecture that will allow expansionextendableif someone wants to convert a resource from manual to automated fulfillmentit shouldnt require changes to the app or redeploymentuse this opportunity to explore any new technologies weve been watching/itching to play withend resultafter three months of on-the-side developmentwe launched our toolcode-named “phenix”noits not misspelledlets do a quick walkthrough of the basic flowin this sample use case well set up a resource to allow employees to request to be added to our auth0 github organizationwe first create the resource and specify a few optionsfor the rest of the resourcewe configure a two-step serial approval workflow that includes manager approval and security team approval for fulfillmentwell keep it manual for now and assign it to the dev ops groupresources also allow the creation of aform to capture data from the user at the time of requestwe will use this to capture the users github userid and a commentnow that the resource is configureds request itapprovalsthe above request will step through each stage of the approval processif all stages are approvedthe request will switch to fulfillment modeif any stage is rejectedthe approval process stops and the request is finishedwe configured an approval stage called dynamic approval for the manager because each requester could have a different managerwe need to determine who 
the approver is at run-timewhen the request is processingwe do this using the webtask platformsetting the approval type as dynamic allows the creation of a webtask where the end userthe administratorcan writecode that figures out who the manager isclicking edit opens the webtask editor where we have simple code to retrieve the manager userid from the request beneficiarys profilethe manager info is saved in the auth0 users app_metadata attributethis is currently populated via an existing outside script that syncs our hr system with auth0 profilesif we didnt have this data already in auth0this code would instead make an api call to our hr systembamboohrto get the users managerwhen the request is submittedthe approvers receive a ticket in their inbox representing a pending approval taskfor the above requestsince i am an admini will be able to see both approval ticketsfor both the manager and security stagesi will approve both and then reject the fulfillment requestnormally these three steps would be performed by separate peoplewe can see the final status of this request asas you can seefor each stage it correctly shows who the intended approver/fulfiller was and who actually took the actiongreat for our audit historyslackwhen the approval tickets are createdthe approvers also receive a message in slack that they can approve or rejectif the approval is set to a groupwe can create mappings of groups to slack channels so only one message is posted in the groups channelthe system ensures that only designated approvers are allowed to click the buttonsnot everyone in the #devops channel is in the devops group we also wanted the ability to create requests from slackthis proved to be a bit more challenging in the endbut a ton of fun this is what it looks likeit may not be immediately evident but there is a lot going on herefirstthe bot needs to keep track of different conversationswith different usersacross different slack teamsthis is easy enoughbut it also needs to keep 
track of where in the conversation it isit accepts some built-in commandslike helpetcbut if someone says something like“i need access to vpn”this is not a built-in commandto make sense of thiswe use apiais nlp services to process these kinds of phrases and detect if the user said something that matches making a request“give [user] [resource]”“request [resource] for [user]”“i want access to [resource]”etc…once it detects that the user wants a resource and for whomyou can request things for other peopletooit then looks to see if that resource has a form definedif sowe start a “conversation” with the userprogressively asking for the data in the form the hard part here was keeping track of it allwith slack botsthere is no inherent concept of a sessionso we had to build thisthe bot could receive a message like “jdoe35” and it would need to figure out that user xon team yis making a request for resource z which has five form fields and this is the response to the third fieldwhich we must have previously asked forex“enter your github user id” automating the fulfillmentin the above example request for githubthe resource was configured for manual fulfillmentit means the designated fulfiller received a ticket representing the to-do itemthey would manually do the workadd the user to our github organizationand mark it as doneto automate thisyou can configure automated provisioning via a webtaskthis is a very simple and straight-forward way to quickly call an api and get something donein the fulfillment tab of the resourcewe can select “webtask auto fulfillment”in the webtask codewe can grab the users github userid from the form submitted with the request and make the api callusing the webtask platform for extensibility was a huge saverwe can let the administrators and end users customize the tool without burdening the development teami wish every saas/webapp had something like thiswhereyou might be given a configuration choice between a or bbut maybe you want that to change 
depending on certain factorsthe webtask platform allowed us to add this third option c to figure it out at runtime by running your owncode trackingnow that a whole system is in places very easy to keep track of who has what and whybelow is a very crude interface that shows the basicsbut the data is there and generating compliance reports is now trivialbuilding a lightweight certification mechanism on top of this would also be fairly straight-forwardcertification is the process of periodically asking someoneusually a managerif a user should still have access to xthereby catching sensitive access that was only temporarily neededengineeringfrom the start we wanted to design and engineer this application as if it might some-day become a productthis meant building it from the ground up with scalingmulti-tenancysecurity and performance in mindwe also wanted to try out some new technologies and patternsthe front end is a reactsingle page appnothing too fancy going on here beyond current modern standards and recommendationscode splittinglazy loadingetc…we decided to try out graphql as the api interfacethe data model fits pretty well for the use casethere are lots of joins happening in the data model and with graphql the client can get everything in one shot to keep the api as light as possiblewe split out the request processing and notification services into separate workers and handed them tasks via a queuethe bot also follows this conventionwe decided to not use slacks rtm api since it uses websocketswhich come with different types of scaling problemsinsteadwe use slacks event api which functions like a webhookwe subscribe to chat messages and slack does a post to us with those messagesthe trick here is that there could be many teamswith many channelsand if the bot is invited to very chatty channelsit will result in lots of posted messages from slackif the bot is in a channelversus direct messageit only responds if called by name and it responds in a threadthis minimizes 
“bot-spam” in public channelsto handle thiswe have an extremely light-weight http server that receives the posted messagereplies back to slack with a 200and puts the data in a queuea separate bot worker processes the messages and replies to the userwe use redis to store the conversation sessionsso while making a request and answering questionsthere could be many bot workers actually replyingspeaking of rediswe also have a fairly standard caching layer for things that dont change that oftenresourcesgroupsuserstenant settingsapi tokenswe proxy those through a layer of local cache -> redis -> mongoa solid caching strategy is especially important when using graphqlgoing forwardin its current statephenix is a fairly solid access request platform with some light provisioningwe are planning to keep building on top of this to create a robust provisioning engine with full connector support that handles all types of crud operations against target systemswe are also building a reconciliation engine to sync data into the system from other sources while phenix is an internal tool for nowwe are aware that it could be beneficial to others and we may decide to open it laterif you think something like this would be useful to your organizationor if you have any other thoughts or questions on this topicplease leave us a comment", "image" : "https://cdn.auth0.com/blog/access-requests/logo.png", "date" : "March 14, 2017" } , { "title" : "Critical Vulnerability in JSON Web Encryption", "description" : "JSON Web Encryption is vulnerable to a classic Invalid Curve Attack. 
Learn how this may affect you and what to do about it.", "author_name" : "Antonio Sanso", "author_avatar" : "https://www.gravatar.com/avatar/56227bdd539c480cc054eaa72eb1885d?s=200", "author_url" : "https://twitter.com/asanso", "tags" : "JWE", "url" : "/critical-vulnerability-in-json-web-encryption/", "keyword" : "TL;DR: if you are using go-jose, node-jose, jose2go, Nimbus JOSE+JWT, or jose4j with ECDH-ES, please update to the latest version. RFC 7516, aka JSON Web Encryption (JWE), and software libraries implementing this specification used to suffer from a classic invalid curve attack. This can allow an attacker to recover the secret key of a party using JWE with Key Agreement with Elliptic Curve Diffie-Hellman Ephemeral Static (ECDH-ES), where the sender could extract the receiver's private key. Premise: in this blog post I assume you are already knowledgeable about elliptic curves and their use in cryptography. If not, Nick Sullivan's relatively easy to understand primer on elliptic curve cryptography or Andrea Corbellini's series Elliptic Curve Cryptography: finite fields and discrete logarithms are great starting points. Then, if you want to climb further up the elliptic learning curve, including the related attacks, you might also want to visit https://safecurves.cr.yp.to/. The djb and Tanja talk at 31C3 also comes with an explanation of this very attack (see minute 43), and Juraj Somorovsky et al.'s research can come in handy for learners. Note that this research was started and inspired by Quan Nguyen from Google and then refined by Antonio Sanso from Adobe. Introduction: JSON Web Token (JWT) is a JSON-based open standard (RFC 7519) defined in the OAuth specification family, used for creating access tokens. The JavaScript Object Signing and Encryption (JOSE) IETF expert group was then formed to formalize a set of signing and encryption methods for JWT, which led to the release of RFC 7515, aka JSON Web Signature (JWS), and RFC 7516, aka JSON Web Encryption (JWE). In this post we are going to focus on JWE. A typical JWE is a dot-separated string that contains
five partsthe jwe protected headerthe jwe encrypted keythe jwe initialization vectorthe jwe ciphertextthe jwe authentication tagan example of a jwe taken from the specification would look likeeyjhbgcioijsu0ett0ffucisimvuyyi6ikeyntzhq00ifqokoawdo13grp2ojahv7lfpzcgv7t6dvzktykomtyumkotcvjrgckcl9kimt03jgeipsedy3mx_etlbbwsrfr05klzcsr4qkaq7yn7e9jwqrb23nfa6c9d-stnimgyfdbsv04uvuxip5zms1gnxkkk2da14b8s4rzvrltdywam_ldp5xnzaypqdb76fdiklavmqgfwx7xwrxv2322i-vdxrfqnzo_tetkzpvlzfiwqyeypglbio56yj7eobdv0je81860ppamavo35ugordbyabcoh9qcfylqr66oc6vfwxrcz_zt2lawvcwtiy3brgpi6uklfcpimfijf7igdxkhzg48v1_alb6us04u3b5eym8tw_c8suk0ltj3rpyizoedqz7talvtu6ug9omo4vpzs9tx_efshs8ib7j6ji
     sdiwkir3ajwqzabtqd_axfbomyuzodetzdvtifvskqthis jwe employs rsa-oaep for key encryption and a256gcm for content encryptionthis is only one of the many possibilities jwe providesa separate specification called rfc 7518 aka json web algorithmsjwalists all the possible available algorithms that can be usedthe one we are discussing today is the key agreement with elliptic curve diffie-hellman ephemeral static  this algorithm allows deriving an ephemeral shared secretthis blog post from neil madden shows a concrete example on how to do ephemeral key agreementin this case the jwe protected header lists as well the used elliptic curve used for  the key agreementonce the shared secret is calculated the key agreement result can be used in one of two waysdirectly as the content encryption keycekfor theencalgorithmin the direct key agreement modeoras a symmetric key used to wrap the cek with the a128kwa192kwor a256kw algorithmsin the key agreement with key wrapping modethis is out of scope for this post but as for the other algorithms the jose cookbook contains example of usage for ecdh-es in combination with aes-gcm or aes-cbc plus hmacobservationas highlighted by quan during his talk at rwc 2017decryption/signature verification input is always under attackers controlas we will see thorough this post this simple observation will be enough to recover the receiverbut first we need to dig a bit into elliptic curve bits and pieceselliptic curvesan elliptic curve is the set of solutions defined by an equation of the formy2 = x3 + ax + bequations of this type are called weierstrass equationsan elliptic curve would look likey2 = x3 + 4x + 20in order to apply the theory of elliptic curves to cryptography we need to look at elliptic curves whose points have coordinates in a finite field fqthe same curve will then look like below over finite field of size 191y2 = x3 + 4x + 20 over finite field of size 191for jwe the elliptic curves in scope are the one defined in suite b andonly 
recently, djb's curves. Among those, the curve that has so far seen the most usage is the famous P-256. Time to open Sage! Let's define P-256. The order of the curve is a really huge number, hence there isn't much an attacker can do with this curve (if the software implements ECDH correctly) in order to guess the private key used in the agreement. This brings us to the next section. The attack: the attack described here is really the classical invalid curve attack. It is simple and powerful, and takes advantage of the mere fact that the Weierstrass formula for scalar multiplication does not take into consideration the coefficient b of the curve equation y2 = x3 + ax + b. P-256's original equation is as we mentioned above; the order of this curve is really big, so we now need to find a curve more convenient for the attacker. Easy peasy with Sage: as you can see from the image above, we just found a nicer curve (from the attacker's point of view) that has an order with many small factors. Then we found a point P on the curve that has a really small order (2447 in this example). Now we can build malicious JWEs (see the Demo time section below) and extract the value of the secret key modulo 2447, with constant-time complexity. A crucial part of the attack's success is having the victim repeat their own contribution to the resulting shared key; in other words, the victim's private key must be the same for each key agreement. Conveniently enough, this is exactly how Key Agreement with Elliptic Curve Diffie-Hellman Ephemeral Static works: indeed, ES stands for Ephemeral-Static, where Static is the contribution of the victim. At this stage we can repeat these operations (find a new curve, craft malicious JWEs, recover the secret key modulo the small order) many, many times, collecting information about the secret key modulo many small orders, and finally: Chinese Remainder Theorem for the win. At the end of the day, the issue here is that the specification, and consequently all the libraries I checked,
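The final recombination step mentioned above can be sketched independently of any curve math. This is a generic Chinese Remainder Theorem implementation, a standalone illustration rather than code from the actual attack.

```javascript
// Generic CRT recombination: given the secret modulo several
// pairwise-coprime small orders, reconstruct it modulo their product.
function modInverse(a, m) {
  // Extended Euclidean algorithm: returns a^-1 mod m (gcd(a, m) = 1).
  let [oldR, r] = [((a % m) + m) % m, m];
  let [oldS, s] = [1n, 0n];
  while (r !== 0n) {
    const q = oldR / r;
    [oldR, r] = [r, oldR - q * r];
    [oldS, s] = [s, oldS - q * s];
  }
  return ((oldS % m) + m) % m;
}

function crt(residues, moduli) {
  const M = moduli.reduce((acc, m) => acc * m, 1n);
  let x = 0n;
  for (let i = 0; i < moduli.length; i++) {
    const Mi = M / moduli[i];
    x += residues[i] * Mi * modInverse(Mi, moduli[i]);
  }
  return x % M;
}
```

In the attack, each (residue, modulus) pair comes from one invalid-curve round; once the product of the collected small orders exceeds the group order, the private key is fully determined.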
missed validating that the received public keycontained in the jwe protected header is on the curveyou can see the vulnerable libraries section below to check how the various libraries fixed the issueagain you can find details of the attack in the original paperdemo timeinstant demo click hereexplanationin order to show how the attack would work in practice i set up a live demo in herokuin https//obscure-everglades-31759herokuappcom/ is up and running one nodejs server app that will act as a victim in this casethe assumption is thisin order to communicate with this web application you need to encrypt a token using the key agreement with elliptic curve diffie-hellman ephemeral staticthe static public key from the server needed for the key agreement is in httpscom/ecdh-es-publicjsonan application that wants to post data to this server needs first to do a key agreement using the servers public key above and then encrypt the payload using the derived shared key using the jwe formatonce the jwe is in place this can be posted to httpscom/secretthe web app will respond with a response status 200 if all went wellnamely if it can decrypt the payload contentand with a response status 400 if for some reason the received token is missing or invalidthis will act as an oracle for any potential attacker in the way shown in the previous the attack sectioni set up an attacker application in https//afternoon-fortress-81941com/you can visit it and click therecover keybutton and observe how the attacker is able to recover the secret key from the server piece by piecenote that this is only a demo application so the recovered secret key is really small in order to reduce the waiting timein practice the secret key will be significantly largerhence it will take a bit more to recover the keyin case you experience problem with the live demoor simply if  want to see the code under the hoodyou can find the demo code in githubhttps//githubcom/asanso/jwe-receiver contains the code of the 
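The fix the libraries adopted, validating that a received point actually lies on the curve before using it, is short. The sketch below uses the standard NIST P-256 constants; it is an illustration of the check, not any particular library's code.

```javascript
// Before using a received public key in ECDH, verify the point satisfies
// the curve equation y^2 = x^3 + ax + b (mod p) for NIST P-256.
const p = 2n ** 256n - 2n ** 224n + 2n ** 192n + 2n ** 96n - 1n;
const a = p - 3n; // a = -3 mod p
const b = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604bn;

function isOnP256(x, y) {
  // Range check: coordinates must be field elements.
  if (x < 0n || x >= p || y < 0n || y >= p) return false;
  const lhs = (y * y) % p;
  const rhs = (x * x * x + a * x + b) % p;
  return lhs === rhs;
}
```

Rejecting off-curve points closes the attack, because the malicious low-order points the attacker crafts live on a different curve (one with a different b coefficient).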
vulnerable server; https://github.com/asanso/jwe-sender contains the code of the attacker. Vulnerable libraries: here you can find a list of the libraries that were vulnerable to this particular attack so far. node-jose v0.9.3 includes the necessary fixes and was published a few weeks ago (here is the gist of the original proof of concept)*. jose2go's fix landed in version 1.3. Nimbus JOSE+JWT pushed out a fixed artifact to Maven Central as v4.34.2**. jose4j comes with a fix for this problem since v0.5**. go-jose: this is the original library found vulnerable by Quan Nguyen. Some of the libraries were implemented in a programming language that already protects against this attack by checking that the result of the scalar multiplication is on the curve. * The latest version of Node.js appears to be immune to this attack; it was still possible to be vulnerable when using browsers without Web Crypto support. ** Affected was the default Java Sun JCA provider that comes with Java prior to version 1.8.0_51; later Java versions and the BouncyCastle JCA provider do not seem to be affected. Improving the JWE standard: I reported this issue to the JOSE working group via mail to the appropriate mailing list; we all seem to agree that an errata listing the problem would at least be welcome. This post is a direct attempt to raise awareness about this specific problem. Acknowledgements: the author would like to thank the maintainers of go-jose, Nimbus JOSE+JWT, and jose4j for their responsiveness in fixing the issue; Francesco Mari for helping out with the development of the demo application; and Tommaso Teofili and Simone Tripodi for troubleshooting. Finally, as mentioned above, I would like to thank Quan Nguyen from Google: indeed, this research would not have been possible without his initial insight. That's all folks! For more crypto goodies, follow me on Twitter. About Antonio Sanso: Antonio works as a senior software engineer at Adobe Research Switzerland, where he is part of the Adobe Experience Manager security team. Antonio is co-author of the OAuth 2 in Action book. He has found vulnerabilities in popular
software such as opensslgoogle chromeapple safari and is included in the googlefacebookmicrosoftpaypal and github security hall of famehe is an avid open source contributorbeing the vice presidentchairfor apache oltu and pmc member for apache slinghis working interests span from web application security to cryptographyantonio is also the author of more than a dozen computer security patents and applied cryptography academic papershe holds an msc in computer science", "image" : "https://cdn.auth0.com/blog/jwtalgos/logo.png", "date" : "March 13, 2017" } , { "title" : "Using Serverless Azure Functions with Auth0 and Google APIs", "description" : "Learn to use Node with Azure Functions with Google APIs and Auth0.", "author_name" : "Steve Lee", "author_avatar" : "https://s.gravatar.com/avatar/9cf3b2625e7daf7aa5d01542fbedb2c5", "author_url" : "https://twitter.com/SteveALee", "tags" : "javascript", "url" : "/using-serverless-azure-functions-with-auth0-and-google-apis/", "keyword" : "guest post by @stevealee of opendirectivecomtldrlearn how nodejs backend code via azure functions can access a google api once a user logs in with google via the auth0 lock widgetwithout a doubtauthentication for web apps is one of the most complex features to implement correctlyif youre not carefulit will eat a large chunk of your development timeworseif you dont get it exactly right youre left vulnerable to being hackedwhich will take even more of your precious timenot to mention damaging your reputationthereforeits nice to have auth0 around to help mitigate this problem with their flexible service along with some of the best documents and support in the businessi picked a complex case as my first attempt at auth for a single page appspasoftware as a servicesaasproductthis post is the story of my experience along with some working javascript code for azure functions with auth0serverless architectureazure functions are part of microsofts offering in the relatively new serverless architecture 
space, sometimes referred to as Functions as a Service (FaaS). Serverless architecture allows you to concentrate your development efforts on your business logic, or backend application code. In this extension of Platform as a Service (PaaS), Microsoft manages all the lower layers of the hardware and software stack for you: for example, servers, operating systems, web servers, and even platforms such as Node.js. Note that serverless code is event driven; triggers may be HTTP requests but can also come from other sources, such as a database update. This introductory article on martinfowler.com explains a web app use of serverless architecture and also links to a very thorough post by Mike Roberts. The problem: I'm developing a set of open source components used in a commercial SaaS designed to support the needs of people with cognitive disabilities or low digital literacy. The initial components and product will provide simplified access to shared photographs and email. Given this, Google Picasa and Gmail seemed like natural choices for the initial underlying services. (Unfortunately, the Picasa API has been feature-stripped recently as Google moved over to Google Photos.) My initial requirement for the user experience is that users can easily authenticate by signing into their existing Google account. The code should then be able to access their photos and emails using the Picasa and Gmail APIs; this will require authorized access based on the user credentials provided when they sign in. The initial user story that we cover in this post is: as a user, I want to log into the app with my Google account so I get a list of my Google Photos albums. That all seemed fairly straightforward after spending some time learning the basics of OAuth and OpenID flows from a mixture of Auth0 and OpenID documentation. Then I read the various Google API and auth docs and ended up confused. Google spreads the documentation around several places, and it is not always consistent or precise. In addition, Google's docs are often unclear on whether they are describing
access from a client or backend and which specific authentication flows they are talking aboutfinallythey often use their own sdksor librarieswhich obscures the details and is largely irrelevantthis also adds another large download for client usersgetting nowhere very slowlyafter exploring the google apis with some experimental code accessing them directly from the spa i wanted to pull my hair outthe picasa api in particular is very flaky in how it handles cors and authenticationplan b was to use auth0 to do all the heavy liftingmy hope was their lock widget would solve the technical issues relatively easilylock handles the nonce and state attributes used to stop hackinglock is also flexible in user experience optionsfor example it easily allows the addition of extra serviceshoweveri soon found out the access_token that lock provides to a spa is not usable in google apis and it was hard to find any answersat this pointi started to think that backend access was going to be the solutionin addition to reliable access theres also the question of what to do when tokens expirewe need to avoid having the user keep logging inso refresh tokens will be required which must be stored securely in the backendas they effectively allow endless accessseveral other design requirements pointed to backend accessand using azure functions meant a rapid development and relatively low devops requirementswin - wini found after more experimental code that this did eventually work outbut only after i stumbled across a highly relevant auth0 document and requested help from the awesome nicoa customer success engineer at auth0as nico pointed outif you use auth0 as the identity provider then even when proxying other third party identity providersthe access_tokens you get are from auth0they can be used with auth0 apis or your ownbut are not what third party apis requireauth0 does provide a mechanism for backend code to get the access_token from third party identity providersthe token is hidden in 
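The backend step of retrieving the third-party (Google) access_token from the user's Auth0 profile might be sketched as below. The Management API endpoint and the `identities` array are real Auth0 behavior, but the domain, token values, and error handling here are placeholder assumptions.

```javascript
// Hedged sketch: a backend fetching a user's Google access_token via the
// Auth0 Management API (GET /api/v2/users/{id}).
async function getGoogleAccessToken(auth0Domain, mgmtApiToken, userId) {
  const res = await fetch(
    `https://${auth0Domain}/api/v2/users/${encodeURIComponent(userId)}`,
    { headers: { Authorization: `Bearer ${mgmtApiToken}` } }
  );
  if (!res.ok) throw new Error(`Auth0 Management API error: ${res.status}`);
  const profile = await res.json();

  // The IdP token sits on the matching entry of the identities array.
  const identity = (profile.identities || []).find(
    (i) => i.provider === 'google-oauth2'
  );
  return identity && identity.access_token;
}
```

The Management API token used here must have `read:user_idp_tokens` scope, which is why this lookup belongs in the backend rather than the SPA.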
the Auth0 UI for security purposes.

Auth0 and Azure Functions: making life easy. Without further delay, here's the low-down on what you need to do to let a user sign in with Google via the Auth0 Lock and then access a Google API with their credentials, using the Google access_token. I'll also present some links to important docs.

Here's the complete flow we use:
1. The SPA displays the Auth0 Lock, passing suitable options.
2. The user logs in with Google, approving access to the requested scopes (e.g. read photos, read emails) if required.
3. Auth0 creates a new Auth0 user linked to the Google user.
4. The SPA gets the Auth0 user id_token and access_token.
5. The SPA calls the backend HTTP endpoint to get a list of photos etc., and passes the access_token with this request.
6. The backend Azure Function validates the JWT and optionally checks the user is allowed access.
7. The backend uses the user id in the access_token to find the user profile using the Auth0 admin API.
8. The backend extracts the Google access_token from the user's profile.
9. The backend calls the Google Picasa API and processes the results, returning them to the SPA in the HTTP response.

In order for this to work, you need to have the following configured:
- A Google Photos account with some photos, preferably in several albums
- An Auth0 web client for the SPA (authentication for client-side web apps)
- A Google OAuth client for backend access to APIs (connect your client to Google)
- An Auth0 API definition for the API (call APIs from client-side web apps)
- An Auth0 non-interactive client for backend access to the Auth0 Management API (call an identity provider API)
- An Azure account and an Azure Functions app

You should also read: the Auth0 overview, identity provider access tokens, Lock for web, and create your first Azure Function.

Here is a simple vanilla HTML and JavaScript example that allows the user to sign in with the Auth0 Lock and then calls the Azure Functions backend to get a list of Google Photos albums:

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Auth0 and Google APIs</title>
<script
src="https://cdn.auth0.com/js/lock/10.9.1/lock.min.js"></script>
</head>
<body>
<button id="btn-login">Login</button>
<button id="btn-get">Get Albums</button>
<pre id="profile"></pre>
<script>
function getGoogleAlbums(accessToken) {
  var azureFunction = 'AZURE FUNCTION URL HERE';
  var xmlhttp = new XMLHttpRequest();
  xmlhttp.onreadystatechange = function () {
    if (this.readyState == 4 /* && this.status == 200 */) {
      alert(this.status + '\r\n' + this.responseText.replace(/,/g, '\n'));
    }
  };
  xmlhttp.open('GET', azureFunction, true);
  xmlhttp.setRequestHeader('Authorization', `Bearer ${accessToken}`);
  xmlhttp.send();
}

var lock = new Auth0Lock(
  "THIS CLIENT'S ID HERE",
  'domain.eu.auth0.com',
  {
    allowedConnections: ['google-oauth2'],
    allowForgotPassword: false,
    allowSignUp: false,
    closable: false,
    auth: {
      connection_scopes: {
        'google-oauth2': ['https://picasaweb.google.com/data/']
      },
      params: {
        scope: 'openid profile photos',
        audience: 'https://API_ID here'
      },
      responseType: 'id_token token'
    },
    languageDictionary: {
      title: 'Sign into Google'
    }
  }
);

// Listening for the lock authenticated event
lock.on('authenticated', function (authResult) {
  localStorage.setItem('idToken', authResult.idToken);
  localStorage.setItem('accessToken', authResult.accessToken);
  lock.getUserInfo(authResult.accessToken, function (error, profile) {
    if (error) {
      // Handle error
      return;
    }
    localStorage.setItem('profile', JSON.stringify(profile));
    view();
  });
});

document.getElementById('btn-login').addEventListener('click', function () {
  lock.show();
});

document.getElementById('btn-get').addEventListener('click', function () {
  var accessToken = localStorage.getItem('accessToken');
  getGoogleAlbums(accessToken);
});

function view() {
  // Verify that there's a token in localStorage
  var token = localStorage.getItem('idToken');
  if (token) {
    showProfile();
  }
}

function showProfile() {
  var profile = JSON.parse(localStorage.getItem('profile'));
  document.getElementById('profile').textContent = JSON.stringify(profile, null, 2);
}

view();
</script>
</body>
</html>

Now for the Azure Functions backend code. This is a JavaScript HTTP Azure Function with the method set to GET; the token is passed from the frontend code above in the Authorization header. Note: this initial block of constants should not normally be included in the main code, if only to stop you accidentally checking your secrets into GitHub. Rather, it's good practice to place them in the Function App Service's settings and reference them from the code.

// Constants
const AUTH0_DOMAIN_URL = 'https://domain';
const AUTH0_API_ID = 'https://api_id';
const AUTH0_SIGNING_CERTIFICATE = `-----BEGIN CERTIFICATE-----
<get this from the Auth0 client advanced settings -> Certificates>
-----END CERTIFICATE-----`;
const AUTH0_ADMIN_CLIENT_ID = 'your admin client
app id';
const AUTH0_ADMIN_CLIENT_SECRET = 'your admin app client secret';

This main body of the code can be added via the Azure Functions console:

// Create decorator that checks the JWT signature and specified fields
const jwtValidateDecorator = require('./azure-functions-auth0')({
  clientId: AUTH0_API_ID,
  clientSecret: AUTH0_SIGNING_CERTIFICATE,
  algorithms: ['RS256'],
  domain: `${AUTH0_DOMAIN_URL}/`
});

// The main function's function
module.exports = jwtValidateDecorator((context, req) => {
  if (req.user) {
    // Get a token to access the admin API
    getAdminAccessToken()
      .then(({ object: { access_token } }) => {
        const userId = req.user.sub;  // has been added to the req by the decorator
        return getUserProfile(access_token, userId);
      })
      // Get the album list from Google
      .then(({ object }) => {
        const google_access_token = object.identities[0].access_token;  // hidden from the Auth0 console
        return getAlbums(google_access_token);
      })
      // Get the album titles
      .then(({ object: { feed: { entry } } }) => {
        const titles = entry.map(ent => ent.title.$t);
        return {
          status: 200,
          body: titles,
          headers: { 'Content-Type': 'application/json' }
        };
      })
      .catch(err => {
        return { status: 400, body: err.message };
      })
      .then(res => {
        context.done(null, res);
      });
  } else {
    const res = { status: 400, body: 'Something is wrong with the authorization token' };
    context.done(null, res);
  }
});

Here are the supporting functions called from the main code block above. They can be placed in the same function for simplicity; an alternative is to place them in a separate module file and "require" them as usual with Node. Azure Functions allows you to provide several functions and supporting code in a single Functions app.

const request = require('request');

// Call a remote HTTP endpoint and return a JSON object
function requestObject(options) {
  return new Promise((resolve, reject) => {
    request(options, (error, response, body) => {
      if (error) {
        reject(error);
      } else if (200 > response.statusCode || 299 < response.statusCode) {
        reject(new Error(
          `Remote resource ${options.url} returned status code: ${response.statusCode}: ${body}`));
      } else {
        const object = (typeof body === 'string') ? JSON.parse(body) : body;  // FIXME throws
        resolve({ code: response.statusCode, object });
      }
    });
  });
}

// Get an access token for the Auth0 admin API
function getAdminAccessToken() {
  const options = {
    method: 'POST',
    url: `${AUTH0_DOMAIN_URL}/oauth/token`,
    body: {
      client_id: AUTH0_ADMIN_CLIENT_ID,
      client_secret: AUTH0_ADMIN_CLIENT_SECRET,
      audience: `${AUTH0_DOMAIN_URL}/api/v2/`,
      grant_type: 'client_credentials'
    },
    json: true
  };
  return requestObject(options);
}

// Get the user's profile from the admin API
function getUserProfile(accessToken, userId) {
  const options = {
    method: 'GET',
    url: `${AUTH0_DOMAIN_URL}/api/v2/users/${userId}`,
    headers: { 'Authorization': `Bearer ${accessToken}` }
  };
  return requestObject(options);
}

// Get user Google Photos album list
function getAlbums(accessToken) {
  const options = {
    method: 'GET',
    // url: `https://www.googleapis.com/gmail/v1/users/me/labels`,
    url: `https://picasaweb.google.com/data/feed/api/user/default?alt=json`,
    headers: { 'Authorization': `Bearer ${accessToken}` }
  };
  return requestObject(options);
}

We need to check the Auth0 access_token is valid before allowing the API code to be executed. This is done by a decorator (or wrapper) function based on the npm azure-functions-auth0 module, but modified to work correctly with an Auth0 API access_token:

// azure-functions-auth0.js
// Based on the npm package azure-functions-auth0,
// but modified to handle the Auth0 API access token.
const jwt = require('express-jwt');
// import ArgumentError from './errors/ArgumentError';
const ArgumentError = Error;

module.exports = (options) => {
  if (!(options instanceof Object)) {
    throw new ArgumentError('The options must be an object');
  }
  if (!options.clientId || options.clientId.length === 0) {
    throw new ArgumentError('The Auth0 client or API id has to be provided');
  }
  if (!options.clientSecret || options.clientSecret.length === 0) {
    throw new ArgumentError('The Auth0 client or API secret has to be provided');
  }
  if (!options.domain || options.domain.length === 0) {
    throw new ArgumentError('The Auth0 domain has to be provided');
  }

  const middleware = jwt({
    secret: options.clientSecret,
    audience: options.clientId,
    issuer: options.domain,
    algorithms: options.algorithms
  });

  return (next) => {
    return (context, req) => {
      middleware(req, null, (err) => {
        if (err) {
          const res = {
            status: err.status || 500,
            body: { message: err.message }
          };
          return context.done(null, res);
        }
        return next(context, req);
      });
    };
  };
};

Running the code: for a local client development server, I simply installed the npm package lite-server, configured to port 8000 with a 'bs-config' file. For the backend, you'll need to create an HTTP Azure Function with the method set to GET. You'll also need to install the two npm dependencies, express-jwt and request. In the Azure Functions control panel, go to 'Function app settings' -> 'Console' to open up a console, then cd to the folder for your function and enter the following command: npm install express-jwt request. You'll also need to set up CORS by adding your client URL, e.g. localhost:8000. This is
found in the Azure Functions console: open the 'Function app settings' panel and click on 'Configure CORS'. Copy the function's URL into the SPA code constants block.

Observations: as this is a serverless backend with no local state storage, the same authorization code will run for every similar endpoint. We can tidy up the code to be more DRY (don't repeat yourself) by moving the code that gets the Auth0 admin and Google access_tokens into a module shared by all the functions in the Functions app.

Conclusion: Auth0 provides all the features needed to access Google APIs with a user's credentials. When a user signs in through Auth0, you get an Auth0 access token. You then need to obtain the third-party access token for Google's APIs; this is done with backend code for security. The code accesses the user's profile via the Auth0 admin API and can then obtain the access token provided when the user signed in with Google. Azure Functions provides an ideal way to create the backend code in Node.js without the need to create and configure servers, or Node itself. An HTTP function is easy to create and configure via the Azure Functions control panel, or everything can be done locally and then deployed to Azure. Best of all, both Auth0 and Azure Functions provide free subscriptions that allow you to explore them in detail. Have fun!", "image" : "https://cdn.auth0.com/blog/azure-functions-and-auth0/logo.png", "date" : "March 10, 2017" } , { "title" : "Serverless development reimagined with the new Webtask Editor", "description" : "We've just shipped a brand new editor for Webtask to go from 0 to code in seconds!", "author_name" : "Javier Centurion", "author_avatar" : "https://s.gravatar.com/avatar/a5878db74baa36ad0ae9cda759f9f2f8.jpg?s=60", "author_url" : "https://twitter.com/jcenturion86", "tags" : "Serverless", "url" : "/serverless-development-reimagined-with-the-new-webtask-editor/", "keyword" : "Serverless development reimagined with the new Webtask Editor: if you are building serverless applications, then you want to get from zero to
code in seconds. We've just shipped a brand new editor for Webtask which makes this desire a reality. The Webtask Editor is a rich online environment for creating, editing, and testing your webtasks. In addition, it allows you to manage secrets, configure GitHub two-way sync, view realtime logs, and more. It makes serverless development a breeze, and you never have to leave the browser or install anything to use it. And with our out-of-the-box support for over 1000 Node modules, you can get a lot of work done. Let's take a quick walkthrough of the experience.

Creating a new webtask: with the new editor, getting started can't get any easier. Just head to webtask.io/make, log in with your preferred credentials, and you'll be on your way. From the popup dialog, you will see a few options: Webtask (this creates an empty webtask); Cron (this creates an empty scheduled webtask); Pick a template (start coding based on selecting from a library of templates); Import from GitHub (import your code from a GitHub repo to a webtask).

Webtask: selecting 'Webtask' will put you right into the editor, where you can start authoring a new task.

Cron: cron tasks are great for executing a task on a schedule, such as checking a Twitter feed for mentions. When you create a new cron task, you will see two panes: the left pane is the scheduler, where you specify the schedule for your task, and the right pane is where you put the code for your task. For more info about cron, check this document.

Templates: templates let you choose from a selection of starter code that you can use for building your tasks. We've included templates for integrating Webtask with common services like Stripe, Slack, SendGrid, GitHub, Twilio, Facebook, and many more.

Importing from GitHub: if you have existing webtasks in a repo, you can import them directly into Webtask by pointing to the repo.

Editor features: now let's take a look at some of the awesome editor features.

Runner and logs: we've designed the new editor to streamline your development and allow you to iterate fast. To help with testing, the editor comes with an integrated runner. In the
runner, you can set different HTTP methods, parameters, headers, etc. To help with debugging, we've added a realtime logs viewer that lets you view your task's console output while it is executing.

Secrets management: if your tasks are talking to other authenticated services, you don't want to store credentials in the code. You can define new secure secrets right in the editor, which are then accessible from the code via the context object.

GitHub integration: to take your experience up a notch, we've baked in GitHub integration support. This allows you to sync your webtask with a file in a GitHub repo. You can enable this to work bi-directionally, such that commits and pull requests to the repo result in the task automatically being deployed, and any changes in the editor result in commits to the repo. You can also bind multiple tasks to different branches of the same repo, thus having dev, test, and prod versions of your tasks. It's super powerful.

Task management: press Cmd + P, or click on the 'Webtasks' icon, and you'll see a list of all your tasks. From the list, you can switch to a different task, open a task in a different window, or even delete tasks. You can type into the search bar to filter the list of displayed tasks.

Shortcuts: the new editor has tons of shortcuts for common actions within the editor, as well as additional features like beautifying your code. You can see the list of shortcuts by clicking on the shortcuts icon in the upper right corner.

CLI support: if you like using our CLI, we've got you covered. You can go right from the shell to the editor with the edit command, i.e. wt edit mytask.

Go try it: the new Webtask Editor is an amazing tool for serverless development. It will let you instantly go from idea => code => running. Not only do you get a rich browser-based authoring experience, but you get a tool to secure and debug your code. Go get started playing with the Webtask Editor now, and also check out our documentation at webtask.io/docs/editor.", "image" : "https://cdn.auth0.com/blog/webtask/logo.png", "date" : "March 09, 2017" } , { "title" : "3 Easy Practical
Steps You Can Take To Drive More Users To Convert", "description" : "Yes, your login can help you raise your conversion rate — here's how.", "author_name" : "Diego Poza", "author_avatar" : "https://avatars3.githubusercontent.com/u/604869?v=3&s=200", "author_url" : "https://twitter.com/diegopoza", "tags" : "conversion", "url" : "/three-easy-practical-steps-you-can-take-to-drive-more-users-to-convert/", "keyword" : "After users sign up, they get into your fantastic app. You show them the basic features, and you've even done some data analysis to figure out what users need to do to increase their chances of sticking around. When they're in, they're hooked. But your conversion rate still isn't great; something is stopping people from taking the plunge and starting to use your app, which is constraining your growth. If this has you nodding along, you might be suffering from a problem at login. Even if you have a great product, you're not going to succeed unless you actually get users to join. It's time to make a change to the way you're getting users to sign up, and we have some tips.

1. Use social login. One of the best ways to improve your conversion rates is to use social login. Why? Because it's one of the easiest ways for people to sign up. The benefits of social sign-up are clear (anywhere from a 20-60% increase in conversion), and it works for users because they don't have yet another name and password to remember: one click and they're in. This means that, when a user finds your app, they can sign up almost without thinking about it; there's nothing to make them stop and consider whether or not it's worth the effort to sign up before they've even seen what you can really do. In the same vein, you can set up your login with a smart sign-up form. With this, your users don't have to fill in a whole form during sign-up, because you've filled it in for them. An example of this is frictionless sign-up, an integration by Segment that you can use with your login. A user puts in their email to get their sign-up started; then, the Segment integration
retrieves and fills in their name and job information, speeding up the sign-up process. Taking the hassle out of your users' sign-up experience is a no-brainer, and with the broad range of social login options available, there's no reason you can't add a few networks and tricks to your login or sign-up form.

2. Show them their friends. FOMO isn't just a trend; it's an actual, psychologically rooted phenomenon. People want to know what their friends are doing, and feel anxious when they're left out. And if you can tap into that psychology by showing people their friends using your app, you're going to be more likely to get them past your login. Check out how Facebook pulls your list of friends from your Facebook profile to encourage you to use Messenger (source: https://blogs.adobe.com/creativecloud/xd-essentials-user-onboarding-and-empty-states-in-mobile-apps/). The easiest way to tap into this is by making it a breeze for your users to invite their friends to your app. Ensuring that your product is convenient to share with friends, such as configuring app invites on Facebook, can be especially effective. If someone gets your app recommended via an invitation from a friend, that exposure will help you convert that person into a user. For example, if you and your buddy are really into soccer, you might be much more tempted to sign up for World Cup Challenge if you see that they've invited you to play with them (https://blog.branch.io/how-to-deep-link-on-facebook). Showing people their friends as soon as they get past the login page is also a great motivator to stay on social apps. The trick is to configure your social login to acquire their friends from their profile. Immediately seeing their friends will make people feel like your app is a natural extension of their network. Showing potential users right at sign-up that they know people who are active on your app will drive them to convert, because they have a natural desire to see what their friends are up to, and what they've missed out on.

3. Personalize their first experience. Just as showing
people their friends when they sign up is a good way to demonstrate the value of your app, personalizing a user's first experience can help you get people from signing up and onboarding to being dedicated users. One easy way to connect from the very first login is to use geography to your advantage. Whether you're showing them recommendations in the area, getting regional news on their feed, or sampling a rainy day playlist to match the weather, using their location to customize what they see at sign-up will help them feel welcomed, and let them know that they can personalize their own experience. Foursquare asks for a user's location during sign-up; when they enter the app, new users immediately see suggestions near them. This demonstrates the value and relevance of the app immediately (https://www.useronboard.com/how-foursquare-onboards-new-users/?slide=66). Another way you can hook users into your app is by connecting them to their interests. No matter the focus of your app, there's a way to connect users with what's relevant to their life. If you know a user likes the NBA, you could suggest they follow players or teams; if they're a film buff, have them save the location of their favorite theatre. This is another place where configuring social login can help you: just as you pulled their friends from their profile, you can also look at pages they've liked or followed. This will help you curate options to show your new users as they work through sign-up and onboarding, which will encourage them to actually get into your app.

Give new users what they want. The consistent factor between all three methods to improve retention that we've discussed is simply giving users what they want. Social login reduces friction by eliminating the need for users to remember another username and password; showing users their friends at sign-up plays into our desire to connect with our friends; and personalizing from the first login connects users directly with their interests. The old saying that there's no second chance at a first impression is never
more true than when configuring your login. Driving conversions means catering to your users from your very first interaction; setting up your login with your users' wants in mind is a great way to raise your conversion rate.", "image" : "https://cdn.auth0.com/blog/ga/more-users-logo.png", "date" : "March 08, 2017" } , { "title" : "Managing State in Angular with ngrx/store", "description" : "Learn how to manage application state with ngrx/store: reactive Redux for Angular.", "author_name" : "Kim Maida", "author_avatar" : "https://en.gravatar.com/userimage/20807150/4deb2db17135af46f17d5cda3b58fd0d.png", "author_url" : "https://twitter.com/KimMaida", "tags" : "angular", "url" : "/managing-state-in-angular-with-ngrx-store/", "keyword" : "Get the 'Migrating an AngularJS App to Angular' book for free: spread the word and download it now. TL;DR: in this article, we'll explore managing state with an immutable data store in an Angular application using ngrx/store, reactive Redux for Angular. We'll also authenticate our app with Auth0 and implement route authorization with route guards. The final code can be cloned from this GitHub repository.

Managing state in Angular apps: state management in large, complex applications has been a headache plaguing AngularJS / Angular developers over the last few years. In AngularJS (version 1.x), state management is often addressed using a confusing mixture of services, events, and $rootScope. In Angular (versions 2+), component interaction is cleaner but can still be quite involved, encompassing many different approaches depending on the desired direction of the flow of data. (Note: AngularJS refers specifically to version 1.x of the framework, while Angular refers to versions 2.x and up, as per the branding guidelines for Angular.) Some developers are using Redux with AngularJS or Angular. Redux is a 'predictable state container for JavaScript apps' and supports a single, immutable data store. Redux is best known for its use with React, but it can be utilized with any view library. Egghead.io hosts an excellent free
video series on Redux from its creator, Dan Abramov.

Introducing ngrx/store: for our Angular application, we're going to use ngrx/store rather than Redux. What is the relationship between Redux and ngrx/store, and why would we prefer one over the other?

Relationship to Redux: ngrx/store is an 'RxJS powered state management library for Angular applications, inspired by Redux', authored by Rob Wormald, an Angular Developer Advocate. It shares Redux's core fundamentals but uses RxJS, which implements the observer pattern in JS and comes packaged with Angular. It follows the core principles of Redux and is specifically designed for Angular.

Fundamental tenets of ngrx/store:
- State is a single, immutable data structure.
- Actions describe state changes.
- Pure functions called reducers take the previous state and the next action to compute the new state.
- State is accessed with the Store, an observable of state and an observer of actions.

Let's break this down. The following is a quick (but important) overview of the basics; we'll go more in depth as we build our application.

Actions: actions are information payloads that send data from the application to the reducer, which updates the store. Actions are the only way the store receives data. In ngrx/store, the Action interface looks like this:

// Actions consist of type and data payload
export interface Action {
  type: string;
  payload?: any;
}

The type should describe the kind of state change we want. For example, this might be something like 'ADD_TODO' or 'DECREMENT', etc. The payload is the data being sent to the store in order to update it. Actions are dispatched to the store like so:

// Dispatch action to update store
store.dispatch({
  type: 'ADD_TODO',
  payload: 'buy milk'
});

Reducers: reducers specify how the state changes in response to actions. A reducer is a pure function that describes state mutations in the app by taking the previous state and the dispatched action, and returning the next state as a new object, generally using Object.assign and/or spread syntax:

// Reducer function specifies how the state
// changes when an action is dispatched
export const
todoReducer = (state = [], action) => {
  switch (action.type) {
    case 'ADD_TODO':
      return [...state, action.payload];
    default:
      return state;
  }
};

It's important to be mindful of purity when writing reducers. Pure functions: do not mutate state outside their scope; have return values that depend only on their inputs; and, given the same input, always return the same output. You can read more about purity here. It is the responsibility of the developer to ensure purity and state immutability in JavaScript, so make sure to be mindful when writing your reducers.

Store: the store holds the entire immutable state of the application. The store in ngrx/store is an RxJS observable of state and an observer of actions. We can use the store to dispatch actions. We can also subscribe to observe and react to state changes over time with the store's select() method, which returns an observable.

Angular with ngrx/store: the Pet Tags app. Now that we're familiar with the basics of how ngrx/store works, we're going to build an Angular app that allows users to customize a name tag for their pet. Our app will have the following features: users can choose tag shape, font style, text, and optional extras; users will need to authenticate before creating a tag; users can see a simple preview of their tag as they build it; and, when finished, users can create another tag or log out. We'll create several components to compose a tag builder and a tag preview. We'll create components and routes for logging in, creating a tag, and finishing up. The state of our tag builder app will be managed with ngrx/store. We'll also use Auth0 and route guards to protect our application. Our Pet Tags app will look like this. Let's get started!

Angular app setup. Install dependencies: make sure you have Node.js with npm installed (LTS download recommended). Next, we'll install the Angular CLI for scaffolding and serving our app. Run the following command to install angular-cli globally:

$ npm install -g @angular/cli

This will install the latest version of the Angular CLI tool. Keep in mind that the Angular CLI just came out of beta and is now into release candidates at the time of writing; updates
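The purity rules just listed are easy to violate by accident in JavaScript. A minimal sketch (plain JS, names mine, not from the article) contrasting an in-place mutation with the Object.assign pattern used throughout this tutorial:

```javascript
// IMPURE: mutates the state object it was given,
// so the previous state is silently corrupted
function badReducer(state, action) {
  state.count = state.count + action.payload;
  return state;
}

// PURE: returns a brand-new object; the input is untouched
function goodReducer(state, action) {
  return Object.assign({}, state, { count: state.count + action.payload });
}

const before = { count: 1 };
const after = goodReducer(before, { type: 'INCREMENT', payload: 2 });

console.log(after.count);      // 3
console.log(before.count);     // 1 -- previous state is preserved
console.log(after === before); // false -- a new object was returned
```

The identity check at the end is exactly what makes change detection cheap: consumers can compare references instead of deep-comparing state trees.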
are still to be expected. If you need to update your Angular CLI installation at any time, refer to the Angular CLI GitHub readme here.

Create an Angular app: in a directory of your choosing, open a command prompt and create a new Angular app project:

$ ng new pet-tags-ngrx

Navigate into your new /pet-tags-ngrx folder and install the necessary packages to support ngrx/store, like so:

$ cd pet-tags-ngrx
$ npm install @ngrx/core @ngrx/store --save

We now have everything we need to get started on our app.

Customize app boilerplate: let's customize the generated Angular boilerplate to better suit the application we want to build.

Create the src/app/core folder: first, create the folder src/app/core. Our app's root component and core files will live here. Move the app.component.* files into this folder. For brevity, this tutorial will not cover testing, so we will ignore all *.spec.ts files. If you'd like to write tests, please do so; otherwise, these files will not be mentioned again in this article, and they have been removed from the source code in the GitHub repository for simplicity.

Update the app module: next, open the src/app/app.module.ts file. We need to update the path to our app.component file, since we just moved it into the src/app/core folder:

// src/app/app.module.ts
import { AppComponent } from './core/app.component';

Organize assets: navigate to the src/assets folder. Inside assets, add a new folder called images; leave this empty for now, we'll add some images later. Move the src/styles.css file from the root folder into src/assets. Moving styles.css requires us to make a change to .angular-cli.json: open this file and change the styles array as follows:

// .angular-cli.json
"styles": [
  "assets/styles.css"
],

Add Bootstrap CSS to the Angular app: finally, we'll add Bootstrap CSS to the index.html file in our app. This <link> tag was copied from the Bootstrap CDN; we'll only use the compiled CSS and not the JS. While we're at it, let's update our app's <title> to 'Pet Tags':

<!-- index.html -->
<title>Pet Tags</title>
<!-- Bootstrap CDN -->
<link
rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.6/css/bootstrap.min.css" integrity="sha384-rwoiresju2yc3z8gv/npezwav56rsmlldc3r/azzgrngxqqknkkofvhfqhnuweyj" crossorigin="anonymous">
</head>

Serve the app: we can serve our app on localhost and watch for changes with the following command:

$ ng serve

Start the server and navigate to http://localhost:4200. The app should look like this.

App component: now we're ready to start building out the features of our Pet Tags Angular application. We'll start with the app.component.* files; this is our root component. Changes here will be minimal.

Delete the app component CSS: let's delete the app.component.css file. We won't need it, since we'll only use Bootstrap for styling this component.

App component TypeScript: we also need to remove the reference to the deleted CSS file in app.component.ts. We can also delete the boilerplate title property from the AppComponent class. Our file should look like this:

// src/app/core/app.component.ts
import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html'
})
export class AppComponent {}

App component template HTML: now let's add some HTML to the app.component.html template. Replace the current contents of this file with the following:

<!-- src/app/core/app.component.html -->
<div class="container">
  <div class="row">
    <div class="col-sm-12">
      <h1 class="text-center">Pet Tags</h1>
    </div>
  </div>
  <router-outlet></router-outlet>
</div>

We'll use Bootstrap styles to add a grid and a heading, then we'll add the <router-outlet> directive; this is where our views will render when we change routes in our single-page app. At this point, the app will throw an error until we establish routing and page components, so let's do that next.

Create page components: as mentioned before, our app will have three routes: a homepage with login, a page where the user can create and preview a new pet tag, and a completion page where the user can view their finished tag and log out. Let's create these page components so we can set up routing; we'll come back to each of them to build them out. Execute the following commands from the root of your pet-tags-ngrx project folder to generate the components:

$ ng g component pages/home
$ ng
g component pages/create
$ ng g component pages/complete

The ng g command (or its long form, ng generate) creates the necessary files and folders for Angular components, directives, pipes, and services. It also imports components in app.module.ts. We now have the scaffolding for our three page components, so let's set up routing.

Create a routing module: let's build a separate NgModule to support routing. Create a new file in the src/app/core folder called app-routing.module.ts:

// src/app/core/app-routing.module.ts
import { NgModule } from '@angular/core';
import { RouterModule } from '@angular/router';
import { HomeComponent } from '../pages/home/home.component';
import { CreateComponent } from '../pages/create/create.component';
import { CompleteComponent } from '../pages/complete/complete.component';

@NgModule({
  imports: [
    RouterModule.forRoot([
      { path: '', component: HomeComponent },
      { path: 'create', component: CreateComponent },
      { path: 'complete', component: CompleteComponent },
      { path: '**', redirectTo: '', pathMatch: 'full' }
    ])
  ],
  providers: [],
  exports: [ RouterModule ]
})
export class AppRoutingModule {}

We now have our three routes, '/create' and '/complete' plus the homepage, and page-not-found errors will redirect back to the homepage. Next, let's open our main app module file, app.module.ts, and add the new AppRoutingModule to imports, like so:

import { AppRoutingModule } from './core/app-routing.module';
...
imports: [
  ...,
  AppRoutingModule
]

We now have routing set up. We should be able to navigate in the browser by entering the URLs defined in the AppRoutingModule. Our HomeComponent now renders in the <router-outlet> when we're on the homepage.

Homepage component: the HomeComponent will simply have a message and a login button for unauthenticated visitors. If a user is already logged in, they'll be sent to the /create route instead. Initially, we'll set up our components without authentication; after the primary features of our ngrx/store app are built, we'll add Auth0 authentication and a route guard. For now, let's add a message and a placeholder button that takes the user to the /create page. Open the home.component.html template and replace the boilerplate content with the following markup:

<!-- src/app/pages/home/home.component.html -->
<div class="col-sm-12 text-center">
  <p class="lead">Please sign up or log in to create a name tag for your beloved
pet.</p>
  <p>
    <button class="btn btn-lg btn-primary" routerLink="/create">Log In</button>
  </p>
</div>

At the moment, the 'Log In' button simply navigates to http://localhost:4200/create; later, we'll update it to authenticate the user before going to the create page. Our homepage now looks like this.

Pet tag model: now it's time to start implementing our tag builder and state management. The first thing we'll do is create a model for our state. We want this model to represent the current pet tag. Create a new file, src/app/core/pet-tag.model.ts:

// src/app/core/pet-tag.model.ts
export class PetTag {
  constructor(
    public shape: string,
    public font: string,
    public text: string,
    public clip: boolean,
    public gems: boolean,
    public complete: boolean
  ) { }
}

export const initialTag: PetTag = {
  shape: '',
  font: 'sans-serif',
  text: '',
  clip: false,
  gems: false,
  complete: false
};

The class declares the shape of the PetTag type; these are the required properties and type annotations for our application's pet tag state object. Next, we export a constant called initialTag. This constant declares the values in the default state object; we'll use this to initialize state as well as reset it.

Pet tag actions: now we're ready to build an actions creator for our action types. Recall that actions are dispatched to a reducer to update the store; we'll declare an action for each kind of modification we want to make to the store. Create the following file, pet-tag.actions.ts:

export const SELECT_SHAPE = 'SELECT_SHAPE';
export const SELECT_FONT = 'SELECT_FONT';
export const ADD_TEXT = 'ADD_TEXT';
export const TOGGLE_CLIP = 'TOGGLE_CLIP';
export const TOGGLE_GEMS = 'TOGGLE_GEMS';
export const COMPLETE = 'COMPLETE';
export const RESET = 'RESET';

We're defining actions as constants. Alternatively, we could construct injectable Action classes, as done in the ngrx/example-app. For our small demo app, this can contribute to indirection, so we'll keep it simple.

Pet tag reducer: now let's build our reducer function that will take actions and update the store. Create pet-tag.reducer.ts:

import { Action } from '@ngrx/store';
import { PetTag, initialTag } from './core/pet-tag.model';
import { SELECT_SHAPE, SELECT_FONT, ADD_TEXT, TOGGLE_CLIP, TOGGLE_GEMS, COMPLETE, RESET } from './pet-tag.actions';

export function petTagReducer(state: PetTag = initialTag, action: Action) {
  switch (action.type) {
    case SELECT_SHAPE:
      return Object.assign({}, state, {
shapepayload }case select_font{ fontcase add_text{ textcase toggle_clip{ clipclip }case toggle_gems{ gemsgems }case complete{ completecase resetinitialtag}}first we import action from ngrx/storethen we need the pettag model and its default statewe also need to import the actions we created in the previous stepnow well create our pettagreducerfunctionthe reducer accepts previous state and the dispatched action as argumentsremember that this is a pure functioninputs determine outputs and the function does not modify global statethis means that when we return anything from the reducerit either needs to be a new object or it can output an unmodified inputsuch as in the default casell use objectto return new objects containing the values from source objects in most casesthe sources will be the previous state and objects containing the action payloadthe toggle_clip and toggle_gems actions toggle booleans that are assigned in the initialtag statethereforewe dont need a payload when we dispatch these actionswe can simply set the value to its opposite in these casesre sending a payload with the complete action because we want to explicitly set it to trueand only do so once for each tag createdwe could use a toggle for this as wellbut for clarityll dispatch a specific value as a payload insteadnotice that the reset case uses the imported initialtag objectbecause initialtag is a constantusing it here does not interfere with the reducers purityimport store in app modulewe now have actions and a reducer functionwe need to tell our application to use the store and reduceropen the appts file and update the followingimport { storemodule } fromimport { pettagreducer } fromstoremoduleprovidestore{ pettagpettagreducer }we can now implement state management with store updates in our applicationbuilding thepageour createcomponentwhich we initialized earlier for routingis going to be a smart componentit will have several dumb child componentssmart and dumb componentssmart componentsalso 
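Because the reducer is a pure function of the previous state and an action, it can be exercised entirely outside Angular. The following is a minimal standalone sketch (plain TypeScript, no ngrx imports; the `Action` interface here is an assumption matching the `{ type, payload }` objects we dispatch in this tutorial), showing that each dispatch produces a new object and never mutates its inputs:

```typescript
// Standalone illustration of the reducer pattern used above.
// `Action` is a local stand-in for @ngrx/store's Action shape.
interface Action { type: string; payload?: any; }

interface PetTag {
  shape: string; font: string; text: string;
  clip: boolean; gems: boolean; complete: boolean;
}

const initialTag: PetTag = {
  shape: '', font: 'sans-serif', text: '',
  clip: false, gems: false, complete: false
};

function petTagReducer(state: PetTag = initialTag, action: Action): PetTag {
  switch (action.type) {
    case 'SELECT_SHAPE': return Object.assign({}, state, { shape: action.payload });
    case 'TOGGLE_CLIP':  return Object.assign({}, state, { clip: !state.clip });
    case 'RESET':        return Object.assign({}, state, initialTag);
    default:             return state; // unknown actions leave state untouched
  }
}

// Each dispatch returns a fresh object; previous states are never mutated.
const s1 = petTagReducer(initialTag, { type: 'SELECT_SHAPE', payload: 'bone' });
const s2 = petTagReducer(s1, { type: 'TOGGLE_CLIP' });

console.log(s1.shape);         // 'bone'
console.log(s2.clip);          // true
console.log(initialTag.shape); // '' (untouched)
```

This purity is what lets ngrx/store replay, inspect, and time-travel through state: the store is just the fold of all dispatched actions over the reducer.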
### Smart and Dumb Components

Smart components, also known as container components, are generally root-level components. They contain business logic, manage state and subscriptions, and handle events. In our application, they are the routable page components. `CreateComponent` is a smart component and will implement the logic for our tag builder; it will handle events emitted by the several dumb child components that make up the builder.

Dumb components, also known as presentational components, rely only on the data they are given by parent components. They can emit events that are then handled in the parent, but they do not utilize subscriptions or stores directly. Dumb components are modular and reusable; we will use a tag preview dumb component on both the Create page and the Complete page. `CreateComponent` and `CompleteComponent` will be smart components.

### Create Page Features

Our Create page will have the following features:

- tag shape selector
- tag font style selector and tag text field
- options to include a clip and add gems
- preview of the tag's shape and text
- a "Done" button that finalizes the tag

### Create Component TypeScript

Let's start with the `CreateComponent` class. Open `create.component.ts`:

```typescript
// src/app/pages/create/create.component.ts
import { Component, OnInit, OnDestroy } from '@angular/core';
import { Observable } from 'rxjs/Observable';
import { Subscription } from 'rxjs/Subscription';
import { Store } from '@ngrx/store';
import { SELECT_SHAPE, SELECT_FONT, ADD_TEXT, TOGGLE_CLIP, TOGGLE_GEMS, COMPLETE } from './../../core/pet-tag.actions';
import { PetTag } from './../../core/pet-tag.model';

@Component({
  selector: 'app-create',
  templateUrl: './create.component.html'
})
export class CreateComponent implements OnInit, OnDestroy {
  tagState$: Observable<PetTag>;
  private tagStateSubscription: Subscription;
  petTag: PetTag;
  done = false;

  constructor(private store: Store<PetTag>) {
    this.tagState$ = store.select('petTag');
  }

  ngOnInit() {
    this.tagStateSubscription = this.tagState$.subscribe((state) => {
      this.petTag = state;
      this.done = !!(this.petTag.shape && this.petTag.text);
    });
  }

  ngOnDestroy() {
    this.tagStateSubscription.unsubscribe();
  }

  selectShapeHandler(shape: string) {
    this.store.dispatch({ type: SELECT_SHAPE, payload: shape });
  }

  selectFontHandler(fontType: string) {
    this.store.dispatch({ type: SELECT_FONT, payload: fontType });
  }

  addTextHandler(text: string) {
    this.store.dispatch({ type: ADD_TEXT, payload: text });
  }

  toggleClipHandler() {
    this.store.dispatch({ type: TOGGLE_CLIP });
  }

  toggleGemsHandler() {
    this.store.dispatch({ type: TOGGLE_GEMS });
  }

  submit() {
    this.store.dispatch({ type: COMPLETE, payload: true });
  }
}
```

This smart component contains the logic for customizing a pet tag. We'll import `OnInit` and `OnDestroy`, which will initialize and clean up our store subscription. We'll also need `Observable` and `Subscription` from RxJS, and `Store` from @ngrx/store. Our actions will be dispatched from this component, so we'll import most actions from the actions creator, with the exception of `RESET`. Finally, we'll import our `PetTag` model. We won't need any CSS for this component, so I've removed the CSS file and the reference to it.

In the `CreateComponent` class, `tagState$` is a `PetTag`-shaped observable. In the constructor, we'll use the ngrx/store method `select()` to set `this.tagState$` to the state observable. The `ngOnInit()` lifecycle hook sets up the subscription to the `tagState$` observable; this sets the `petTag` property to the state returned by the observable stream each time a new state is pushed. The `done` property checks for a selected shape and text: these are the two properties of a pet tag that must have truthy values in order for the tag to be fully customized. The `ngOnDestroy()` lifecycle hook then cleans up the subscription when the component is destroyed.

Next come the event handler functions that dispatch actions to the store. These handlers are executed when the child dumb components emit events to update the tag state. Each handler uses the store's `dispatch()` method to send the desired action type and payload to our reducer. In a more complex app, you may wish to dispatch actions in an actions creator service that can be injected into your components. However, for our small app and for learning purposes this is unnecessary, so we will dispatch actions directly from our smart components using constants from our actions creator, `pet-tag.actions.ts`.

**Aside: code linting.** Angular's CLI comes with code linting in the form of the codelyzer package. You can lint your project at any time by running the following command:

```bash
$ ng lint
```

Let's take the opportunity to lint our Pet Tags app now. If any errors are found, correct them before proceeding. It's good practice to lint periodically throughout development to maintain clean code. The linting configuration can be found at `tslint.json` in your project.
### Tag Shape Component

Now we'll build our first presentational component: `TagShapeComponent`. Let's generate the scaffolding for this child component with the following Angular CLI command:

```bash
$ ng g component pages/create/tag-shape
```

The tag shape component will display four different images with possible shapes: a bone, a rectangle, a circle, and a heart. The user can select which shape they'd like for their tag. Download all four SVG images from the GitHub repository (`pet-tags-ngrx/src/assets/images/`) and place them in your local `pet-tags-ngrx/src/assets/images` folder.

#### Tag Shape Component TypeScript

Next, open the `tag-shape.component.ts` file:

```typescript
// src/app/pages/create/tag-shape/tag-shape.component.ts
import { Component, Output, EventEmitter } from '@angular/core';

@Component({
  selector: 'app-tag-shape',
  templateUrl: './tag-shape.component.html',
  styleUrls: ['./tag-shape.component.css']
})
export class TagShapeComponent {
  tagShape: string;
  @Output() selectShapeEvent = new EventEmitter();

  constructor() { }

  selectShape(shape: string) {
    this.selectShapeEvent.emit(shape);
  }
}
```

Add `Output` and `EventEmitter` to the `@angular/core` imports. Our tag shape selector will use radio buttons, so we'll need a property to store the shape `ngModel`. The shape name options are strings, so we'll set `tagShape`'s type annotation to `string`. Next we need an `@Output()` decorator to emit an event when the user selects a shape; this sends the information to the parent `CreateComponent`. The `selectShape()` method emits the event with the shape information. The parent can then handle this event with the `selectShapeHandler()` method we created earlier in `CreateComponent`. We'll hook this up to the parent shortly.

#### Tag Shape Component Template

Before that, we'll add the necessary template markup for our `TagShapeComponent`. Modify the `tag-shape.component.html` file as shown:

```html
<!-- src/app/pages/create/tag-shape/tag-shape.component.html -->
<h3>Shape</h3>
<p class="form-text text-muted">Choose a tag shape to get started!</p>

<label class="tagShape col-sm-3">
  <img src="/assets/images/bone.svg">
  <input
    type="radio"
    name="shape"
    [(ngModel)]="tagShape"
    (change)="selectShape(tagShape)"
    value="bone">
</label>
<label class="tagShape col-sm-3">
  <img src="/assets/images/rectangle.svg">
  <input
    type="radio"
    name="shape"
    [(ngModel)]="tagShape"
    (change)="selectShape(tagShape)"
    value="rectangle">
</label>
<label class="tagShape col-sm-3">
  <img src="/assets/images/circle.svg">
  <input
    type="radio"
    name="shape"
    [(ngModel)]="tagShape"
    (change)="selectShape(tagShape)"
    value="circle">
</label>
<label class="tagShape col-sm-3">
  <img src="/assets/images/heart.svg">
  <input
    type="radio"
    name="shape"
    [(ngModel)]="tagShape"
    (change)="selectShape(tagShape)"
    value="heart">
</label>
```

We create radio options for each of our four shapes along with their images. When any input is selected, we use the `(change)` event to fire our method emitting the `selectShapeEvent` with the `tagShape` as its argument.

#### Tag Shape Component Styles

This component could use a little bit of styling beyond Bootstrap, so add the following to `tag-shape.component.css`:

```css
/* src/app/pages/create/tag-shape/tag-shape.component.css */
:host {
  display: block;
  margin: 20px 0;
}
.tagShape {
  padding: 10px;
  text-align: center;
}
img {
  display: block;
  height: auto;
  margin: 0 auto;
  max-height: 50px;
  max-width: 100%;
}
```

Note: the `:host` pseudo-class selector targets the component's host element, `<app-tag-shape>` in this case.

#### Add Tag Shape Component to the Create Page

Finally, we'll implement our `TagShapeComponent` by adding it to our smart `CreateComponent` template. Open `create.component.html` and replace the boilerplate markup with the following:

```html
<!-- src/app/pages/create/create.component.html -->
<div class="col-sm-12 text-center lead">
  Hello! Create a customized tag for your pet.
</div>

<app-tag-shape
  (selectShapeEvent)="selectShapeHandler($event)"></app-tag-shape>
```

Our parent component is now listening for the `selectShapeEvent` from the tag shape child and handling it by executing the `selectShapeHandler()` method we created in our `CreateComponent` class earlier. If you recall, that method dispatches the `SELECT_SHAPE` action to the store and looks like this:

```typescript
selectShapeHandler(shape: string) {
  this.store.dispatch({ type: SELECT_SHAPE, payload: shape });
}
```

Our app now updates state when the user selects a shape for their tag.

### Tag Text Component

Next we'll create a child component that lets the user choose a font style and enter the text they'd like on their pet tag. Generate the component scaffolding with the following command:

```bash
$ ng g component pages/create/tag-text
```

#### Tag Text Component TypeScript

Now open the `tag-text.component.ts` file:

```typescript
// src/app/pages/create/tag-text/tag-text.component.ts
import { Component, Output, EventEmitter } from '@angular/core';

@Component({
  selector: 'app-tag-text',
  templateUrl: './tag-text.component.html',
  styleUrls: ['./tag-text.component.css']
})
export class TagTextComponent {
  tagTextInput = '';
  fontType = 'sans-serif';
  @Output() selectFontEvent = new EventEmitter();
  @Output() addTextEvent = new EventEmitter();

  constructor() { }

  selectFont(font: string) {
    this.selectFontEvent.emit(font);
  }

  addText(text: string) {
    this.addTextEvent.emit(text);
  }
}
```

This component works the same way as our `TagShapeComponent`, so it looks very similar. We import `Output` and `EventEmitter` and create properties for `tagTextInput` and `fontType` based on user inputs. We aren't adding `string` type annotations to these properties because declaring initial values allows the types to be inferred automatically.
We'll emit events when the user updates the tag text or changes the font style selection.

#### Tag Text Component Template

Our tag text component template, `tag-text.component.html`, should look like this:

```html
<!-- src/app/pages/create/tag-text/tag-text.component.html -->
<h3>Text</h3>
<p class="form-text text-muted">
  Select your desired font style and enter your pet's name.<br>
  You can see what your tag will look like in the preview below.
</p>

<div class="form-group row">
  <label for="font" class="col-sm-2 offset-sm-2 col-form-label">Font</label>
  <select
    id="font"
    class="form-control col-sm-6"
    [(ngModel)]="fontType"
    (change)="selectFont(fontType)">
    <option value="sans-serif">sans-serif</option>
    <option value="serif">serif</option>
  </select>
</div>
<div class="form-group row">
  <label for="tagText" class="col-sm-2 offset-sm-2 col-form-label">Text</label>
  <input
    id="tagText"
    type="text"
    class="form-control col-sm-6"
    [(ngModel)]="tagTextInput"
    (input)="addText(tagTextInput)"
    maxlength="8">
</div>
```

We're using a `<select>` element and a text input field to let the user choose options for their tag. The ngModels are updated on user input, and events are emitted to the parent component.

#### Tag Text Component Styles

We'll add just one ruleset to `tag-text.component.css`.

#### Add Tag Text Component to the Create Page

Finally, we need to add the `TagTextComponent` to the Create page:

```html
<app-tag-text
  *ngIf="petTag.shape"
  (selectFontEvent)="selectFontHandler($event)"
  (addTextEvent)="addTextHandler($event)"></app-tag-text>
```

Notice that we're adding an `*ngIf` structural directive to the `<app-tag-text>` element: we only want this component to appear once the user has selected a shape. This is because we're going to create a preview of the tag soon, and it doesn't make sense to show a preview unless a shape has already been selected. It also prevents users from entering text or extra tag options before choosing a shape. We listen for `TagTextComponent` to emit the `selectFontEvent` and `addTextEvent` events and handle them with the methods we added to `CreateComponent` earlier, which dispatch the `SELECT_FONT` and `ADD_TEXT` actions and payloads to the reducer:

```typescript
selectFontHandler(fontType: string) {
  this.store.dispatch({ type: SELECT_FONT, payload: fontType });
}

addTextHandler(text: string) {
  this.store.dispatch({ type: ADD_TEXT, payload: text });
}
```

### Tag Extras Component

Now we'll let the user choose whether they want a few extras for their tag. Create the scaffolding for `TagExtrasComponent` with this command:

```bash
$ ng g component pages/create/tag-extras
```

#### Tag Extras Component TypeScript

Open `tag-extras.component.ts`:

```typescript
// src/app/pages/create/tag-extras/tag-extras.component.ts
import { Component, Output, EventEmitter } from '@angular/core';

@Component({
  selector: 'app-tag-extras',
  templateUrl: './tag-extras.component.html',
  styleUrls: ['./tag-extras.component.css']
})
export class TagExtrasComponent {
  tagClip: boolean;
  gems: boolean;
  @Output() toggleClipEvent = new EventEmitter();
  @Output() toggleGemsEvent = new EventEmitter();

  constructor() { }

  toggleClip() {
    this.toggleClipEvent.emit();
  }

  toggleGems() {
    this.toggleGemsEvent.emit();
  }
}
```

This should look very familiar by now. "Extras" are options to include a tag clip or gems with our pet tag, so they are boolean values serving as ngModels for checkboxes.

#### Tag Extras Component Template

Add the necessary markup to `tag-extras.component.html`:

```html
<!-- src/app/pages/create/tag-extras/tag-extras.component.html -->
<h3>Extras</h3>
<p class="form-text text-muted">Select any extras you would like to add.</p>

<label class="col-sm-4 offset-sm-2">
  <input
    type="checkbox"
    [(ngModel)]="tagClip"
    (change)="toggleClip()"> Include tag clip
</label>
<label class="col-sm-4">
  <input
    type="checkbox"
    [(ngModel)]="gems"
    (change)="toggleGems()"> Add gems
</label>
```

We use checkboxes to let the user choose whether they'd like to add extras.

#### Tag Extras Component Styles

We want to add a bottom border to our host element, since this is the last component before we show the customized tag preview. Add the following to `tag-extras.component.css`:

```css
/* src/app/pages/create/tag-extras/tag-extras.component.css */
:host {
  border-bottom: 1px solid #ccc;
  display: block;
  padding-bottom: 20px;
}
```

#### Add Tag Extras Component to the Create Page

Let's add the tag extras component to `create.component.html`, like so:

```html
<app-tag-extras
  *ngIf="petTag.shape"
  (toggleClipEvent)="toggleClipHandler()"
  (toggleGemsEvent)="toggleGemsHandler()"></app-tag-extras>
```

Like the tag text component, we'll only display the extras if the user has already selected a shape. The `toggleClipEvent` and `toggleGemsEvent` events are handled by the `CreateComponent` methods we created earlier to dispatch the `TOGGLE_CLIP` and `TOGGLE_GEMS` actions to the reducer:

```typescript
toggleClipHandler() {
  this.store.dispatch({ type: TOGGLE_CLIP });
}

toggleGemsHandler() {
  this.store.dispatch({ type: TOGGLE_GEMS });
}
```

Since these are boolean toggles, no payloads are necessary; recall that we set up the reducer to use the previous state to determine the next state in these cases.

### Tag Preview Component

Now let's create a component that shows a simple preview of the pet tag as it's being created.
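All three presentational components above follow the same contract: the child emits what the user chose, the smart parent handles it and dispatches to the store. Stripped of Angular, the pattern can be sketched in plain TypeScript (a minimal sketch; `Emitter` is a hypothetical stand-in for Angular's `EventEmitter`, and the handler plays the role of the `(selectShapeEvent)="selectShapeHandler($event)"` binding):

```typescript
// Hypothetical minimal emitter standing in for Angular's EventEmitter.
class Emitter<T> {
  private listeners: Array<(value: T) => void> = [];
  subscribe(fn: (value: T) => void): void { this.listeners.push(fn); }
  emit(value: T): void { this.listeners.forEach(fn => fn(value)); }
}

// "Dumb" child: owns no business logic, only reports what the user picked.
class TagShapeChild {
  selectShapeEvent = new Emitter<string>();
  selectShape(shape: string): void { this.selectShapeEvent.emit(shape); }
}

// "Smart" parent: subscribes and decides what to do (here, records the
// action it would dispatch to the store).
const dispatched: string[] = [];
const child = new TagShapeChild();
child.selectShapeEvent.subscribe(shape => dispatched.push(`SELECT_SHAPE:${shape}`));

child.selectShape('bone');
child.selectShape('heart');
console.log(dispatched); // ['SELECT_SHAPE:bone', 'SELECT_SHAPE:heart']
```

The one-directional flow is the point: data flows down from the store through the smart component's bindings, and user intent flows up as events, so the dumb components stay reusable on any page.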
This component will be a child of both the Create and Complete pages, so let's create it in the root of the `app` folder, like so:

```bash
$ ng g component tag-preview
```

#### Tag Preview Component TypeScript

Open `tag-preview.component.ts` and add this code:

```typescript
// src/app/tag-preview/tag-preview.component.ts
import { Component, OnChanges, Input } from '@angular/core';
import { PetTag } from './../core/pet-tag.model';

@Component({
  selector: 'app-tag-preview',
  templateUrl: './tag-preview.component.html',
  styleUrls: ['./tag-preview.component.css']
})
export class TagPreviewComponent implements OnChanges {
  @Input() petTag: PetTag;
  imgSrc = '';
  tagClipText: string;
  gemsText: string;

  constructor() { }

  ngOnChanges() {
    this.imgSrc = `/assets/images/${this.petTag.shape}.svg`;
    this.tagClipText = this.boolToText(this.petTag.clip);
    this.gemsText = this.boolToText(this.petTag.gems);
  }

  private boolToText(bool: boolean) {
    return bool ? 'Yes' : 'No';
  }
}
```

`TagPreviewComponent` is a dumb component that takes input from the `CreateComponent` parent and displays it, but does not produce any output. We import the `Input` decorator and the `OnChanges` lifecycle hook; we also need the `PetTag` model so we know what shape to expect from the input. The `TagPreviewComponent` class needs to implement `OnChanges` so we can take advantage of the `ngOnChanges()` hook, which executes each time changes to the component's inputs are detected. We need this in order to update our preview whenever the user modifies their tag.

The `@Input() petTag` that we receive from the parent component is the state object, which has the shape declared by the `PetTag` model we defined at the beginning. It might look something like this:

```typescript
{
  shape: 'bone',
  font: 'serif',
  text: 'Fawkes',
  clip: true,
  gems: false,
  complete: false
}
```

We want to display this data in a user-friendly, visual way. We'll do this by showing an image of the tag with the user-entered text, plus notes about whether the user has chosen to include a clip or gems. We set the image source as well as the tag clip and gems option text whenever changes to the input are detected; the input is provided by `CreateComponent`'s subscription to its `tagState$` store observable.

#### Tag Preview Component Template

Open the `tag-preview.component.html` file and add:

```html
<!-- src/app/tag-preview/tag-preview.component.html -->
<div *ngIf="petTag.shape" class="row tagView-wrapper">
  <div class="tagView {{petTag.shape}}">
    <img [src]="imgSrc">
    <div class="text {{petTag.font}}">{{petTag.text}}</div>
  </div>
  <p class="text-center">
    <strong>Tag clip:</strong> {{tagClipText}}<br>
    <strong>Gems:</strong> {{gemsText}}
  </p>
</div>
```

The preview will show if there is a shape. We display the appropriate shape SVG image along with a shape class, display the pet tag text in the appropriate font using a class with the font value, and print out whether the user has chosen to include a tag clip or gems.

#### Tag Preview Component Styles

Recall that there are four possible tag shapes: bone, rectangle, circle, and heart. In order to display a nice preview with any of these shapes, we need some additional styling. Open `tag-preview.component.css`:

```css
/* src/app/tag-preview/tag-preview.component.css */
.tagView-wrapper {
  padding-top: 20px;
}
.tagView {
  height: 284px;
  position: relative;
}
.text {
  font-size: 48px;
  position: absolute;
  text-shadow: 1px 1px 0 rgba(255,255,255,.8);
  top: 99px;
}
/* … per-shape font-size and offset adjustments for the four shapes … */
.sans-serif {
  font-family: Arial, Helvetica, sans-serif;
}
.serif {
  font-family: Georgia, 'Times New Roman', Times, serif;
}
```

After some basic styling to position the preview elements, we set the font sizes based on shape and the font families based on the user's selected font style. Now our `<app-tag-preview>` is ready to be added to the container component templates.

#### Add Tag Preview Component to the Create Page

Open `create.component.html` and let's add the tag preview child component at the bottom:

```html
<app-tag-preview [petTag]="petTag"></app-tag-preview>
```

Square brackets denote one-way binding syntax. We already established our local `petTag` property in the `CreateComponent`'s `tagStateSubscription`, and we're passing it to the tag preview component. Now we should be able to see live changes in the tag preview as we customize our tag.

### Submit Completed Tag

Now that we have our tag builder and preview built, let's add a "Done" button to submit the finished tag to the Complete page. We've already created a `submit()` method in `CreateComponent` that dispatches the `COMPLETE` action and payload to the reducer; all we need to do is create a button that calls this method in `create.component.html`:

```html
<div *ngIf="petTag.shape" class="col-sm-12 text-center">
  <p class="lead">
    Preview your customized tag above.
    If you're happy with the results, click the button below to finish!
  </p>
  <p>
    <button
      class="btn btn-success btn-lg"
      [disabled]="!done"
      (click)="submit()"
      routerLink="/complete">Done</button>
  </p>
</div>
```
We disable the button if the `done` property is falsey. We declared `done` in the `CreateComponent`'s `tagStateSubscription` earlier: the tag can be considered ready for submission if it has a shape and text. If the user has added these, they will be able to click the button to submit their tag, which also routes them to the Complete page.

### Complete Page Component

We scaffolded the Complete page when we set up the main routes for our app; now it's time to implement the component.

#### Complete Component TypeScript

Let's open the `complete.component.ts` smart component and implement the following code:

```typescript
// src/app/pages/complete/complete.component.ts
import { Component, OnInit, OnDestroy } from '@angular/core';
import { Observable } from 'rxjs/Observable';
import { Subscription } from 'rxjs/Subscription';
import { Store } from '@ngrx/store';
import { RESET } from './../../core/pet-tag.actions';
import { PetTag, initialTag } from './../../core/pet-tag.model';

@Component({
  selector: 'app-complete',
  templateUrl: './complete.component.html'
})
export class CompleteComponent implements OnInit, OnDestroy {
  tagState$: Observable<PetTag>;
  private tagStateSubscription: Subscription;
  petTag: PetTag;
  emptyTag: PetTag = initialTag;

  constructor(private store: Store<PetTag>) {
    this.tagState$ = store.select('petTag');
  }

  ngOnInit() {
    this.tagStateSubscription = this.tagState$.subscribe((state) => {
      this.petTag = state;
    });
  }

  ngOnDestroy() {
    this.tagStateSubscription.unsubscribe();
  }

  newTag() {
    this.store.dispatch({ type: RESET });
  }
}
```

`CompleteComponent` is a routable smart (container) component. We'll be managing a store subscription, so we need to import `OnInit`, `OnDestroy`, `Observable`, `Subscription`, and `Store`. We'll also have a link the user can click to start over and create a new tag; this sets the state back to its initial values, so we need to import the `RESET` action as well as `PetTag` and `initialTag` from our model. This component doesn't need any styling beyond Bootstrap, so we'll delete the `complete.component.css` file and remove the reference to it.

Like in our `CreateComponent` smart component, we create a `tagState$` observable and a local `petTag` property. We also create an `emptyTag` property with the `PetTag` type and set its value to `initialTag`. We assign `tagState$` as the store observable, then in `ngOnInit()` we subscribe to the observable and set the `petTag` property. In `ngOnDestroy()`, we clean up our subscription by unsubscribing. Our `newTag()` method dispatches the `RESET` action, which resets the application state so that a new tag can be customized.

#### Complete Component Template

Our `CompleteComponent`'s HTML template will look like this:

```html
<!-- src/app/pages/complete/complete.component.html -->
<div *ngIf="petTag.complete">
  <div class="col-sm-12 alert alert-success">
    <p class="text-center">
      Congratulations! You've completed a pet ID tag for <strong>{{petTag.text}}</strong>.<br>
      Would you like to
      <a
        (click)="newTag()"
        class="alert-link"
        routerLink="/create">create another?</a>
    </p>
  </div>
  <app-tag-preview [petTag]="petTag"></app-tag-preview>
</div>

<div *ngIf="!petTag.complete" class="col-sm-12 alert alert-danger">
  <p class="text-center">
    Oops! You haven't customized a tag yet.
    <a routerLink="/create">Click here to create one now.</a>
  </p>
</div>
```

First we show a success alert that congratulates the user on creating a tag for their pet, grabbing the pet's name from the `petTag` state object's `text` property. We provide a link to create another tag, which executes the `newTag()` method and routes the visitor back to the Create page to start fresh. We show the tag preview component and pass the `petTag` object to it with `<app-tag-preview [petTag]="petTag">`. We also need an error message for when the user manually navigates to the `/complete` route without having finished customizing a tag, with a link to take them back to the Create page.

We now have the primary functionality of our custom pet tags application set up and working.

### Authentication with Auth0

We'll now protect our application so that only authenticated users can access it. We already set up a "Log In" button in our `HomeComponent`, but right now it just navigates to the Create page. Let's hook up the authentication functionality using Auth0.

#### Sign Up for Auth0

The first thing we'll need is an Auth0 account. Follow these simple steps to get started:

1. Sign up for a free Auth0 account.
2. In your Auth0 dashboard, create a new client. Name your new app and select "Single Page Web Applications".
3. In the settings for your newly created app, add `http://localhost:4200` to the Allowed Callback URLs and Allowed Origins (CORS).
4. If you'd like, you can set up some social connections and then enable them for your app in the client options under the Connections tab. The example shown in the screenshot utilizes username/password database, Facebook, Google, and Twitter.

#### Set Up Dependencies

Auth0 authenticates using JSON Web Tokens. Let's install the angular2-jwt helper library using npm:

```bash
$ npm install angular2-jwt --save
```

We also need the Auth0 Lock library, which provides the login widget and methods. We'll include the CDN-provided script for Lock in the `<head>` of our `index.html` file.
```html
<!-- src/index.html -->
<!-- Auth0 -->
<script src="//cdn.auth0.com/js/lock/10.11.0/lock.js"></script>
```

#### Create an Auth Service

Next we'll create a service to manage authentication. User authentication will be handled via local storage, so it won't be necessary to create another store. In a more complex application you may wish to make a user store, but for our purposes a simple service will work just fine. Let's create an authentication service:

```bash
$ ng g service core/auth
```

Our `auth.service.ts` file should look like this:

```typescript
// src/app/core/auth.service.ts
import { Injectable } from '@angular/core';
import { Router } from '@angular/router';
import { tokenNotExpired } from 'angular2-jwt';

// Avoid name not found warnings
declare var Auth0Lock: any;
declare var localStorage: any;

@Injectable()
export class AuthService {
  lock = new Auth0Lock('[CLIENT_ID]', '[CLIENT_DOMAIN]', {
    auth: {
      redirectUrl: 'http://localhost:4200',
      responseType: 'token'
    }
  });
  userProfile: Object;

  constructor(private router: Router) {
    this.userProfile = JSON.parse(localStorage.getItem('profile'));

    // Add callback for lock 'hash_parsed' event
    this.lock.on('hash_parsed', (authResult) => {
      if (authResult && authResult.idToken) {
        localStorage.setItem('id_token', authResult.idToken);

        // Get user profile
        this.lock.getProfile(authResult.idToken, (error, profile) => {
          if (error) {
            throw new Error('There was an error retrieving profile data.');
          }
          localStorage.setItem('profile', JSON.stringify(profile));
          this.userProfile = profile;

          // On successful authentication and profile retrieval, go to /create route
          this.router.navigate(['/create']);
        });
      } else if (authResult && !authResult.idToken) {
        // Authentication failed: show Lock widget and log a warning
        this.login();
        console.warn(`There was an error authenticating: ${authResult}`);
      }
    });
  }

  login() {
    this.lock.show();
  }

  logout() {
    localStorage.removeItem('id_token');
    localStorage.removeItem('profile');
  }

  get authenticated(): boolean {
    // Search for an item in localStorage with key == 'id_token'
    return tokenNotExpired('id_token');
  }
}
```

We import `Router` to handle redirection after login and `tokenNotExpired()` from angular2-jwt to make sure our user still has a valid JWT. To avoid TypeScript warnings, we declare types for `Auth0Lock` and `localStorage`. We'll be able to inject our `AuthService` wherever we need access to its properties and methods, i.e., in other components.

In the `AuthService` class, we create a new Lock instance with our Auth0 client's ID and domain. These can be found in your Auth0 dashboard settings for the Single Page Application client you just set up; replace `[CLIENT_ID]` and `[CLIENT_DOMAIN]` with your personalized information. We pass a configuration object to our Lock instance with a `redirectUrl` and `responseType` (you can read more about Lock configuration in the docs). We create a `userProfile` property, with an `Object` type, to store the user's profile information that we retrieve when a visitor authenticates.

Because we store the user's profile and access token in local storage, the first thing we do in our constructor is check for an existing profile; if there's a profile in storage already, we set the `userProfile` property. Next we listen to the Lock instance for the `hash_parsed` event. This is a low-level event that we use instead of the `authenticated` event in order to handle single page app redirection upon login. If an `idToken` is present, we save it to `localStorage` and use it to retrieve the user's profile information. Once the profile has been successfully retrieved, we save it to `localStorage` and redirect to the Create page. If no `idToken` is returned, we reinitialize the login and log an authentication warning.

Finally, we implement three members: `login()`, `logout()`, and the `authenticated` accessor. The `login()` method simply displays the Lock widget so the user can log in with Auth0. The `logout()` method removes the user's token and profile from local storage. The `authenticated` getter checks the JWT to see if it has expired and returns a boolean representing authentication status. We're now ready to use `AuthService` to authenticate users in our application.

#### Provide Auth Service in App Module

We're going to provide `AuthService` globally to the application in `app.module.ts`:

```typescript
// src/app/app.module.ts
import { AuthService } from './core/auth.service';
...
  providers: [AuthService],
```

Import `AuthService` in the app module and add it to the `providers` array. We can now inject this service elsewhere in our application.

#### Home Component Login

The first thing we'll implement with `AuthService` is the "Log In" button we created on the homepage. Open `home.component.ts`:

```typescript
// src/app/pages/home/home.component.ts
import { Component, OnInit } from '@angular/core';
import { Router } from '@angular/router';
import { AuthService } from './../../core/auth.service';

@Component({
  selector: 'app-home',
  templateUrl: './home.component.html'
})
export class HomeComponent implements OnInit {
  constructor(public auth: AuthService, private router: Router) { }

  ngOnInit() {
    if (this.auth.authenticated) {
      this.router.navigate(['/create']);
    }
  }
}
```

Import `AuthService` and `Router` and make them available to the constructor.
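The `tokenNotExpired()` helper we rely on in the `authenticated` getter boils down to reading the JWT's `exp` claim and comparing it to the current time. A simplified standalone sketch of that logic (this is NOT angular2-jwt's actual implementation; the function names are hypothetical, and Node's `Buffer` is used only to decode the demo token):

```typescript
// A JWT is three base64url segments: header.payload.signature.
// The middle segment is a JSON object that may carry an `exp` claim
// (expiry as seconds since the Unix epoch).
declare const Buffer: any; // Node global, declared to keep the demo self-contained

function decodePayload(jwt: string): { exp?: number } {
  const segment = jwt.split('.')[1];
  // base64url -> base64, then decode
  const base64 = segment.replace(/-/g, '+').replace(/_/g, '/');
  return JSON.parse(Buffer.from(base64, 'base64').toString('utf8'));
}

// Sketch of the check: the token is "not expired" only if exp is still
// in the future relative to `nowSeconds`.
function tokenNotExpiredSketch(jwt: string, nowSeconds: number): boolean {
  const { exp } = decodePayload(jwt);
  if (exp === undefined) { return false; } // no expiry claim: treat as invalid
  return exp > nowSeconds;
}

// Build a toy unsigned token for the demo (header/signature are irrelevant here).
const payload = Buffer.from(JSON.stringify({ sub: 'user1', exp: 2000000000 }))
  .toString('base64');
const toyJwt = `x.${payload}.y`;

console.log(tokenNotExpiredSketch(toyJwt, 1000000000)); // true  (still valid)
console.log(tokenNotExpiredSketch(toyJwt, 2000000001)); // false (expired)
```

Note this is purely a client-side convenience check: it decides whether to treat the user as logged in, but only the server verifying the token's signature provides real security.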
`auth` should be public because we need to access its methods in the home template. Using the `OnInit` lifecycle hook, we check if the user is authenticated; if so, we navigate to the Create page so the user can skip the login on the homepage.

Now open `home.component.html`:

```html
<!-- src/app/pages/home/home.component.html -->
...
<button
  class="btn btn-lg btn-primary"
  (click)="auth.login()">Log In</button>
```

We update the "Log In" button so that clicking it executes the `AuthService` `login()` method and shows the Auth0 Lock login box. We now have a functioning login in our app.

#### Complete Component Logout

We also need a way for our users to log out. We'll add a "Log Out" button to the Complete page component. Open `complete.component.ts`:

```typescript
// src/app/pages/complete/complete.component.ts
import { AuthService } from './../../core/auth.service';
...
  constructor(private store: Store<PetTag>, public auth: AuthService) { ... }
```

We import `AuthService` and make it publicly available to the constructor so we can access its properties and methods in the HTML template. Next, open `complete.component.html`:

```html
<!-- src/app/pages/complete/complete.component.html -->
...
<p class="text-center">
  Hello, {{auth.userProfile?.name}}!
  <a
    class="btn btn-danger btn-lg"
    (click)="auth.logout()"
    routerLink="/">Log Out</a>
</p>
```

We greet the user by name and add a "Log Out" button that calls the `AuthService`'s `logout()` method and redirects the user back to the homepage.

#### Greet User in Create Component

For a personal touch, we'll also greet the user by name on the Create page. Open `create.component.ts`, import the `AuthService`, and make it publicly available in the constructor. Next, open the `create.component.html` template and add a personalized greeting after "Hello". Now our app feels more personalized.

#### Create a Route Guard

We can log in and out of our app, but that doesn't offer much more than simple personalization at the moment: any visitor can still navigate to any route they wish if they simply enter URLs manually. Let's implement a route guard so that routes are activated only for logged-in users.

**Important security note:** in our simple demo app, authentication is simply for routing, because we don't have a server component. Client-side authentication does not confer security features. If you're building an authenticated app with a server, you'll need to authorize API requests with the JWT provided by Auth0 using an `Authorization` header; you can read more on how to do this in the Auth0 Angular 2 Calling APIs docs. The angular2-jwt package we installed provides `AUTH_PROVIDERS` to help accomplish this. When making API calls in an authenticated app, we would secure our server requests in addition to implementing presentational route guards. You can read more about securing a Node API in the Angular 2 authentication tutorial.

Create a new file in `src/app/core` called `auth.guard.ts`:

```typescript
// src/app/core/auth.guard.ts
import { Injectable } from '@angular/core';
import { Router, CanActivate } from '@angular/router';
import { AuthService } from './auth.service';

@Injectable()
export class AuthGuard implements CanActivate {
  constructor(private auth: AuthService, private router: Router) { }

  canActivate() {
    if (this.auth.authenticated) {
      return true;
    }
    this.router.navigate(['']);
    return false;
  }
}
```

We need to inject the route guard in our routing module, so we import `Injectable`. We also need `Router` to redirect the user when they're not authenticated, and `CanActivate` to activate (or deactivate) routes based on user authentication status. We import `AuthService` to get this authentication information, and that's it for imports. The `AuthGuard` class implements `CanActivate`, a guard which determines whether a route can be activated. We make `AuthService` and `Router` available privately to the constructor. Our `canActivate()` method checks if the user is authenticated: if they are, the route can be activated, so we return `true`; otherwise we redirect to the homepage so the user can log in and return `false`, and the route cannot be activated.

#### App Routing Module with Route Guard

Now that we've created a route guard, we need to apply it in our application. Let's open the `app-routing.module.ts` file and make some updates:

```typescript
// src/app/core/app-routing.module.ts
import { AuthGuard } from './auth.guard';
...
      { path: 'create', component: CreateComponent, canActivate: [AuthGuard] },
      { path: 'complete', component: CompleteComponent, canActivate: [AuthGuard] },
...
  providers: [AuthGuard],
```

First we import our `AuthGuard`. We add the `canActivate: [AuthGuard]` key/value to each route we want to protect: the `/create` route and the `/complete` route. We also need to add `AuthGuard` to the `providers` array. Unauthorized users can no longer access routes that require authentication; trying to access a protected route when not logged in redirects visitors to the homepage, where they'll see the "Log In" button. Don't forget to run `$ ng lint` if you haven't been doing so, and make sure there are no issues with our code.

### Conclusion

Our simple Angular + ngrx/store + Auth0 application is now complete. Try it out!
### You Might Not Need ngrx/store

State management libraries are great, but please make sure you've read You Might Not Need Redux before you implement ngrx/store in a production Angular application. Our tutorial's sample app is reasonably simple because we're using ngrx/store for teaching and learning. When building production apps for yourself or clients, consider the necessity and ramifications of using a tool like Redux or ngrx/store before implementing it. Angular, with its inclusion of RxJS, now does a great job of managing global data with services, and smaller, simpler apps work just fine with local state. In these cases, it's possible to introduce confusion and indirection if ngrx/store is used unnecessarily. That said, ngrx/store and its kin are incredibly helpful and valuable tools when managing state in large or particularly complex applications. Hopefully you're now able to reason about the paradigm used by Redux and ngrx/store; this should help you make informed decisions regarding how and when to use state management libraries.

### Additional State Management Resources

Here are some additional resources for learning how to manage state with stores:

- ngrx/store on GitHub
- @ngrx/store in 10 minutes
- Comprehensive Introduction to @ngrx/store
- ng-conf: Reactive Angular 2 with ngrx - Rob Wormald
- Angular 2 Service Layers: Redux, RxJs and Ngrx Store - When to Use a Store And Why
- Getting Started with Redux - Dan Abramov on egghead.io

While Angular makes it reasonably straightforward to share and pass data in smaller apps with services and component communication, managing global application state can rapidly become a mess and a headache in complex apps. Global stores like ngrx/store greatly aid in organizing and compartmentalizing state management. You're now prepared to tackle building your own Angular apps with ngrx/store!", "image" : "https://cdn.auth0.com/blog/angular/logo.png", "date" : "March 07, 2017" } , { "title" : "Easily Migrate Your Existing Stormpath Users to Auth0 ", "description" : "Stormpath is shutting down Aug 18, 2017. Learn how to migrate your existing Stormpath users to Auth0 without requiring your users to reset their passwords.", "author_name" : "Ado Kukic", "author_avatar" : "https://s.gravatar.com/avatar/99c4080f412ccf46b9b564db7f482907?s=200", "author_url" : "https://twitter.com/kukicado", "tags" : "migration", "url" : "/how-to-migrate-your-existing-stormpath-users-to-auth0/", "keyword" : "

**TL;DR:** Stormpath announced today that it was acquired by Okta. As a result, the Stormpath API will be shutting down this coming August as the team transitions to Okta. Customers have until August 18, 2017 to export their user data to Okta or a different provider. Find out how to easily migrate your users to Auth0 without requiring them to reset their passwords, along with some additional benefits you'll gain by making the switch. If you would like to follow along with our demo, download the sample app from GitHub.

Stormpath is an authentication-as-a-service company that allows developers to offload their authentication and authorization needs to a third party. The company offered a RESTful API that customers could use to manage identity in their applications. Today, the company announced that it had been acquired by Okta, another company that provides identity management services. Acquisitions in the software industry are the norm; what is surprising about this acquisition is that the Stormpath product will be shut down later this year as the team transitions to Okta, and many customers will have to find an alternative. Customers have until August 18, 2017 to find a new provider, get it up and running, and export their existing users. That could be a challenging amount of unexpected work in such a short time frame.

Stormpath customers have until August 18, 2017 to migrate off the platform as it is being shut down.

At Auth0, our goal is to provide the best authentication and identity management solution that is also simple and easy for developers to work with.

> @auth0 should open user migration to everyone from @gostormpath and gain all those clients that can't move to okta #wearesorry
> &mdash; Tom Compagno (@tomcompagno) March 6, 2017

You ask, we deliver: we're offering the database migration feature for free for all Stormpath customers. 💥

### Database Migration Made Easy with Auth0

The most important thing you are probably concerned with right now is how to migrate your existing users with minimal impact to your applications. At Auth0, we hope to greatly reduce your stress and anxiety with our painless user import functionality. The way this feature works is by setting up a custom database connection and connecting it to your Stormpath account. When your users log in the first time, they will enter their existing Stormpath credentials and, if authenticated successfully, we will automatically migrate that user account from Stormpath into Auth0. Your users will not have to change their password or jump through any additional hoops, and you can decide what data to port over from Stormpath. The next time the user logs in, Auth0 will detect that they have been migrated and authenticate them with their Auth0 account.

Talk is cheap, so let me actually walk you through the steps.

### Implementing the Database Migration Scripts

First of all, you will need an Auth0 account; sign up for free. With your account created, let's set up a custom database connection. In your Auth0 management dashboard, navigate to the database connections section. You can name your connection anything you like. Leave all the default settings as is for now and click the Create button to create the connection.

Next, let's go into this database connection and connect it to your Stormpath account. Click on your newly created connection and navigate to the Custom Database tab. Flip the switch titled 'Use my own database', and the Database Action Scripts section will be enabled. This is where we will write our code to connect to your existing Stormpath user datastore.

We will need to write two scripts: Login and Get User. Login will proxy the login process, and Get User will manage looking up accounts when a user attempts to reset their password. With our custom database feature turned on, let's
enable the import functionalityby default thedatabase connection will allow us to authenticate with an external databaseif we want to migrate users from the external platform into auth0 well need to simply toggle a switchgo to the settings tab of the connection and flip the switch titledimport users to auth0and youre doneone final step well do before implementing our scripts is enabling this connection for our default clientnavigate to the clients tab while you are in your database connection and flip the switch to enable this client for the default connectionif you already have an existing auth0 accountthe connection name may be differentloginthe login script is executed when a user attempts to sign in but their account is not found in the auth0 databasehere we will implement the functionality to pass the user credentials provided to our stormpath user datastore and see if that user is validauth0 provides templates for many common databases such as mongodbmysql and sql serverbut for stormpath we will have to write our ownwe will utilize stormpaths rest api to authenticate the users look at the implementation belowfunction loginusernamepasswordcallback{ // replace the your-client-id attribute with your stormpath id var url =https//apistormpathcom/v1/applications/{your-client-id}/loginattempts// stormpath requires the user credentials be passed in as a base64 encoded message var message = username ++ passwordvar pass = new buffermessagetostringbase64// here we are making the post request to authenticate a user request{ urlurlmethodpostauth{ // your api client id user{stormpath-client-id}// your api client secret password{stormpath-client-secret}}headers{content-typeapplication/jsonjson{ typebasic// passing in the base64 encoded credentials valuepass } }functionerrorresponsebody{ // if response is successful well continue ifstatuscode== 200return callback// a successful response will return a url to get the user information var accounturl = bodyaccounthref// well 
make a second request to get the user infothis time it will be a get request request{ urlaccounturlget{ // your api client id user// your api client secret password} }erroruserinforesponseuserinfobodyuserinfo{ // if we get a successful responsell process it ifvar parsedbody = jsonparse// to get the user identifierll strip out the stormpath api var id = parsedbodyreplacecom/v1/accounts/// finallyll set the data we want to store in auth0 and migrate the user return callbacknull{ user_ididparsedbodyemail// we set the users email_verified to true as we assume if they were a valid // user in stormpaththey have already verified their email // if this field is not setthe user will get an email asking them to verify // their account email_verifiedtrue// add any additional fields you would like to carry over from stormpath }}get userthe get user script is executed when the user attempts to do a password reset but their account is not found in the auth0 databasethe get user script interfaces with your stormpath datastore and checks to see if the user exists thereif the user does existtheir data is sent back to auth0 where the user is automigrated and a password reset email is sent out from auth0once the user confirms the resetthey are good to go and can access your appsubsequent logins will be authenticated against the auth0 database as the users profile is now stored with auth0s look at our implementation of the get user script for stormpathfunction getbyemailcom/v1/applications/{your-client-id}/accountsrequestqs{ qemail } }{ ifvar user = parsedbodyitems[0]ifuservar id = user{ user_idemail_verified// add any additional fields you would like to carry over from stormpath }}with these two scripts we have user migration setup and ready to goto test it and make sure our code workss build a simple application that allows a user to login and request protected resources via an apill build the frontend with angular 2 and the backend well power with springbuilding the frontendwe will 
build our frontend with angular 2ll use the auth0 angular 2 quickstart to get up and running quicklyour source code can be found hereauth0 provides a comprehensive set of quickstartssdksand guides for many popular languages and frameworkssee them all herewith the project downloadedll need to setup our auth0 credentialsll do that in the authconfigjs fileopen the file and change the values to look like thisuse strictexportsmyconfig = { // your auth0 clientidclientid{auth0-client-id}// your auth0 domain domain{your-auth0-domain}auth0comboth of these values can be found in your auth0 management dashboardin the dashboardsimply click on the clients link from the main menuand select the default client that was created when you signed upif you already had an auth0 accountselect the client that has the database connection with thedatabase enabledwith these values configured save the file and run npm installonce npm has installed all the required dependenciesrun the project by executing npm startnavigate to localhost3000 to see the app in actionclick on the login button to login to your applicationclicking the login button will bring up the auth0 lock widget and ask the user to provide their email and passwordherethe user will provide their stormpath email and password credentials and if they are correct they will be logged inif you dont already have a stormpath user account you can login withgo into your stormpath dashboard and create an accountnow login with your stormpath user credentialsnotice that you are instantly logged inif we look at the response data from the transaction well see that the user is coming from the stormpath-users connection alongside other data that we importeds make sure that this user was migrated to auth0 as wellto check this well navigate to the users section of the auth0 dashboard and well now see the user we logged in withthis means that our migration was successfulthis user is now migrated to auth0the next time they login to the applicationll 
check their credentials against auth0s database instead of making the extra call to stormpaththe workflow diagram below illustrates the process once againnow you may notice the two links call public api and call private apis build a simple backend that will return data when these links are clickedll do that nextbuilding the backendfor our backendll build a simple spring boot application that exposes a restful apiyou can get the code for the sample application hereto setup the applicationyou will just need to update the application with your credentialsthe file where the credentials are stored is called auth0properties and can be found in the src/main/resources/ directoryedit the file to look like sodomain{your-auth-domain}comauth0issuer//{your-auth0-domain}com/auth0{your-auth0-client-id}auth0securedroutenot_usedauth0base64encodedsecretfalseauth0authoritystrategyrolesauth0defaultauth0apisecurityenabledsigningalgorithmhs256with this update in placeyou should be able to build the application by runningmvn spring-bootrun -drunarguments=--auth0secret=your_secret_keyif the application was built successfullyyou will be able to access the api at localhost4000the two routes that are exposed by this application that we care about are /public and /securethe /public route will be accessible by everyonewhile the /secure route will return a successful response only if the user is authenticated and passes the correct credentialsonce your backend is up and running go back to your frontend application and try clicking on the two links call public api and call private apithe public api you will be able to access even when not logged infor the private apiyou will need to be logged in to call the route and get the appropriate responsewe also used angular 2 to add some dynamic classesso if the user is logged in well make both of the buttons green to indicate they can be clickedgo further with auth0i hope the user migration functionality i showed in this post helps with your use 
casethis gradual migration works great because it is transparent to your end-usersas the deadline approaches and stormpath prepares to shut down their serviceyou may need to speed up the migration processauth0 can help here as wellyou can bulk import your existing user datastore into auth0 or since we already wrote the get user script you can send out a mass email to your users letting them know they need to change their password and by clicking on the link in the email their accounts will be migrated to auth0now that your migration woes have been taken care ofs briefly talk about what auth0 brings to the table besides authentication and authorizationmany features that auth0 provides can be enabled with the flip of a switchmultifactor authentication is one such featureyou can enable mfa using our in-house mfa solutionguardianwith just the flip of a switchif you are already using a 3rd party mfa solution or have your ownsolutionyou can continue to use it as wellthe auth0 rules extensibility platform allows you to take control of the authorization workflowhere you can configure any number of events such as triggering 3rd party mfaperforming progressive profilingand much morewe want to make your switch to auth0 as painless as possibleso we are making the database migration feature free for all existing stormpath customersto help you get up and running faster we are also giving existing stormpath customers 8 hours of professional services at no costconclusionstormpath will be shutting down their authentication and authorization apis this coming august2017 to move off the platformat auth0we hope to give existing stormpath customers an easy and smooth transition planour database migration feature can start migrating your users todayif you are affected by the stormpath news and want to easily migrate your usersgive auth0 a trysign up for a free account and get started today", "image" : "https://cdn.auth0.com/blog/migrate-stormpath-users/stormpath_logo.png", "date" : "March 
06, 2017" } , { "title" : "An Introduction to Ethereum and Smart Contracts: Bitcoin & The Blockchain", "description" : "Learn about Bitcoin and the genius behind the blockchain concept as we delve into Ethereum", "author_name" : "Sebastián Peyrott", "author_avatar" : "https://en.gravatar.com/userimage/92476393/001c9ddc5ceb9829b6aaf24f5d28502a.png?size=200", "author_url" : "https://twitter.com/speyrott?lang=en", "tags" : "ethereum", "url" : "/an-introduction-to-ethereum-and-smart-contracts/", "keyword" : "bitcoin took the world by surprise in the year 2009 and popularized the idea of decentralized secure monetary transactionsthe concepts behind ithowevercan be extended to much more than just digital currenciesethereum attempts to do thatmarrying the power of decentralized transactions with a turing-complete contract systemread on as we explore how it worksethereum marries the power of decentralized transactions with turing-complete contractstweet this this is part 1 of a 3 post seriesintroductionbitcoin and the double-spending problemin 2009someoneunder the alias of satoshi nakamotoreleased this iconic bitcoin whitepaperbitcoin was poised to solve a very specific problemhow can the double-spending problem be solved without a central authority acting as arbiter to each transactionto be fairthis problem had been in the minds of researchers for some time before bitcoin was releasedbut where previous solutions were of research qualitybitcoin succeeded in bringing a workingproduction ready design to the massesthe earliest references to some of the concepts directly applied to bitcoin are from the 1990sin 2005nick szaboa computer scientistintroduced the concept of bitgolda precursor to bitcoinsharing many of its conceptsthe similarities between bitgold and bitcoin are sufficient that some people have speculated he might be satoshi nakamotothe double-spending problem is a specific case of transaction processingtransactionsby definitionmust either happen or 
notadditionallysomebut not alltransactions must provide the guarantee of happening before or after other transactionsin other wordsthey must be atomicatomicity gives rise to the notion of orderingtransactions either happen or not before or after other transactionsa lack of atomicity is precisely the problem of the double-spending problemspendingor sending money from spender a to receiver bmust happen at a specific point in timeand before and after any other transactionsif this were not the caseit would be possible to spend money more than once in separate but simultaneous transactionswhen it comes to everyday monetary operationstransactions are usually arbitrated by bankswhen a user logs-in to his or her home banking system and performs a wire transferit is the bank that makes sure any past and future operations are consistentalthough the process might seem simple to outsidersit is actually quite an involved process with clearing procedures and settlement requirementsin factsome of these procedures consider the chance of a double-spending situation and what to do in those casesit should not come as a surprise that these quite involved processesresulting in considerable but seemingly impossible to surmount delayswere the target of computer science researchersthe blockchainsothe main problem any transactional system applied to finance must address ishow to order transactions when there is no central authorityfurthermorethere can be no doubts as to whether the sequence of past transactions is validfor a monetary system to succeedthere can be no way any parties can modify previous transactionsavetting processfor past transactions must also be in placethis is precisely what the blockchain system in bitcoin was designed to addressif you are interested in reading about systems that must reach consensus and the problems they facethe paper for the byzantine generals problem is a good startalthough at this point the concept of what a blockchain is is murkybefore getting 
into details about itlets go over the problems the blockchain attempts to addressvalidating transactionspublic-key cryptography is a great tool to deal with one of the problemsvalidating transactionspublic-key cryptography relies on the asymmetrical mathematical complexity of a very specific set of problemsthe asymmetry in public-key cryptography is embodied in the existence of two keysa public and a private keythese keys are used in tandem for specific purposesin particulardata encrypted with the public-key can only be decrypted by using the private-keydata signed with the private-key can be verified using the public-keythe private-key cannot be derived from the public-keybut the public-key can be derived from the private-keythe public-key is meant to be safely shared and can usually be freely exposed to anyoneof interest for creating a verifiable set of transactions is the operation of signing datas see how a very simple transaction can be verified through the use of public-key cryptographys say there is an account holder a who owns 50 coinsthese coins were sent to him as part of a previous transactionaccount holder a now wants to send these coins to account holder bband anybody else who wants to scrutinize this transactionmust be able to verify that it was actually a who sent the coins to bthey must be able to see b redeemed themand no one elseobviouslythey should also be able to find the exact point in timerelative to other transactionsin which this transaction took placeat this point we cannot do thiswe canfortunatelydo everything elsefor our simple examples say the data in the transaction is just an identifier for the previous transactionthe one that gave a 50 coins in the first placethe public-key of the current owner and the signature from the previous ownerconfirming he or she sent those coins to a in the first place{previous-transaction-idfedcba987654321owner-pubkey123456789abcdefprev-owner-signatureaabbccddeeff112233}the number of coins of the current 
transaction is superfluousit is simply the same amount as the previous transaction linked in itproof that a is the owner of these coins is already therehis or her public-key is embedded in the transactionnow whatever action is taken by a must be verified in some wayone way to do this would be to add information to the transaction and then produce a new signaturesince a wants to send money to bthe added information could simply be bs public-keyafter creating this new transaction it could be signed using as private-keythis proves aand only awas involved in the creation of this transactionin javascript based pseudo-codefunction atobprivatekeyaprevioustransactionpublickeyb{ const transaction = {hashpublickeyb }transaction[] = signtransactionreturn transaction}an interesting thing to note is that we have defined transaction ids as simply the hash of their binary representationa transaction id is simply its hashusing anat this pointunspecified hashing algorithmthis is convenient for several reasons we will explain later onfor nowit is just one possible way of doing thingss take the code apart and write it down step-by-stepa new transaction is constructed pointing to the previous transactionthe one that holds as 50 coinsand including bs public signaturenew transaction = old transaction id plus receivers public keya signature is produced using the new transaction and the previous transaction owners private keythats itthe signature in the new transaction creates a verifiable link between the new transaction and the old onethe new transaction points to the old one explicitly and the new transactions signature can only be generated by the holder of the private-key of the old transactionthe old transaction explicitly tells us who this is through the owner-pubkey fieldso the old transaction holds the public-key of the one who can spend itand the new transaction holds the public-key of the one who received italong with the signature created with the spenderif this seems hard to 
grasp at this pointthink of it this wayit is all derived from this simple expressionthere is nothing more to itthe spender simply signs data that saysi am the owner of transaction id xxxi hereby send every coin in it to band anybody elsecan check that it was awho wrote thatto do sothey need only access to awhich is available in the transaction itselfit is mathematically guaranteed that no key other than as private-key can be used in tandem with aso by simply having access to as public-key anyone can see it was a who sent money to bthis makes b the rightful owner of that moneyof coursethis is a simplificationthere are two things we have not consideredwho said those 50 coins were of as propertyordid a just take ownership of some random transactionis he or she the rightful ownerand when exactly did a send the coins to bwas it before or after other transactionsif you are interested in learning more about the math behind public-key cryptographya simple introduction with code samples is available in chapter 7 of the jwt handbookbefore getting into the matter of orderings first tackle the problem of coin genesiswe assumed a was the rightful owner of the 50 coins in our example because the transaction that gave a his or her coins was simply modeled like any other transactionit had as public-key in the owner fieldand it did point to a previous transactionsowho gave those coins to awhats morewho gave the coins to that other personwe need only follow the transaction linkseach transaction points to the previous one in the chainso where did those 50 coins come fromat some point that chain must endto understand how this worksit is best to consider an actual caseso lets see how bitcoin handles itcoins in bitcoin were and are created in two different waysfirst there is the unique genesis blockthe genesis block is a specialhardcoded transaction that points to no other previous transactionit is the first transaction in the systemhas a specific amount of bitcoinsand points to a 
public-key that belongs to bitcoin creator satoshi nakamotosome of the coins in this transaction were sent to some addressesbut they never were really used that muchmost of the coins in bitcoin come from another placethey are an incentiveas we will see in the next section about ordering transactionsthe scheme employed to do this requires nodes in the network to contribute work in the form of computationsto create an incentive for more nodes to contribute computationsa certain amount of coins are awarded to contributing nodes when they successfully complete a taskthis incentive essentially results in special transactions that give birth to new coinsthese transactions are also ends to links of transactionsas well as the genesis blockeach coin in bitcoin can be traced to either one of these incentives or the genesis blockmany cryptocurrency systems adopt this model of coin genesiseach with its own nuances and requirements for coin creationin bitcoinper designas more coins get createdfewer coins are awarded as incentiveeventuallycoin creation will ceaseordering transactionsthe biggest contribution bitcoin brought to existing cryptocurrency schemes was a decentralized way to make transactions atomicbefore bitcoinresearchers proposed different schemes to achieve thisone of those schemes was a simple voting systemto better understand the magic of bitcoins approachit is better to explore these attemptsin a voting systemeach transaction gets broadcast by the node performing itto continue with the example of a sending 50 coins to ba prepares a new transaction pointing to the one that gave him or her those 50 coinsthen puts bs public-key in it and uses his or her own private-keysto sign itthis transaction is then sent to each node known by a in the networks say that in addition to a and bthere are three other nodescdenow lets imagine a is in fact a malicious nodealthough it appears a wants to send b 50 coinsat the same time a broadcasts this transactionit also broadcasts a 
different onea sends those same 50 coins to cconst atob = {// b}const atoc = {00112233445566// cnote how previous-transaction-id points to the same transactiona sends simultaneously this transaction to different nodes in the networkwho gets the 50 coinsworseif those 50 coins were sent in exchange for somethinga might get goods from b and c although one of them wont get the coinssince this is a distributed networkeach node should have some weight in the decisions consider the voting system mentioned beforeeach node should now cast a vote on whether to pick which transaction goes firstnode vote a a to b b a to b c a to c d a to c e a to b each node casts a vote and a to b gets picked as the transaction that should go firstthis invalidates the a to c transaction that points to the same coins as a to bit would appear this solution worksbut only superficially sos see whyfirsts consider the case a has colluded with some other nodedid e cast a random vote or was it in some way motivated by a to pick one transaction over the otherthere is no real way to determine thissecondlyour model does not consider the speed of propagation of transactionsin a sufficiently large network of nodessome nodes may see some transactions before othersthis causes votes to be unbalancedit is not possible to determine whether a future transaction might invalidate the ones that have arrivedeven moreit is not possible to determine whether the transaction that just arrived was made before or after some other transaction waiting for a voteunless transactions are seen by all nodesvotes can be unfairsome node could actively delay the propagation of a transactionlastlya malicious node could inject invalid transactions to cause a targeted denial of servicethis could be used to favor certain transactions over othersvotes do not fix these problems because they are inherent to the design of the systemwhatever is used to favor one transaction over the other cannot be left to choiceas long as a single nodeor 
group of nodescanin some wayfavor some transactions over othersthe system cannot workit is precisely this element that made the design of cryptocurrencies such a hard endeavora stroke of genius was needed to overcome such a profound design issuethe problem of malicious nodes casting a vote in distributed systems is best known as the byzantine generals problemalthough there is mathematical proof that this problem can be overcome as long as there is a certain ratio of non-malicious nodesthis does not solve the problem for cryptocurrenciesnodes are cheap to addthereforea different solution is necessaryphysics to the rescuewhatever system is used to ensure some transactions are preferred over othersno node should be able to choose which of these are with 100% certaintyand there is only one way one can be sure this is the caseif it is a physical impossibility for the node to be able to do thisso no matter how many nodes a malicious user controlsit should still be hard for him or her to use this to his or her advantagethe answer is cpu powerwhat if ordering transactions required a certain amount of workverifiable workin such a way that it would be hard to perform initiallybut cheap to verifyin a sensecryptography works under the same principlescertain related operations are computationally infeasible to perform while others are cheapencrypting data is cheap next to brute-forcing the encryption keyderiving the public-key from the private-key is cheapwhile it is infeasible to do it the other way aroundhashing data is cheapwhile finding a hash with a specific set of requirementsby modifying the input datais notand that is the main operation bitcoin and other cryptocurrencies rely on to make sure no node can get ahead of otherson averages see how this workss define what a block isa block is simply a group of transactionsinside the blockthese transactions are set in a specific order and fulfill the basic requirements of any transactionan invalid transactionsuch as one taking 
funds from an account with no fundscannot be part of a blockin addition to the transactionsa block carries something called proof-of-workthe proof-of-work is data that allows any node to verify that the one who created this block performed a considerable amount of computational workno node can create a valid block without performing an indefinite but considerable amount of workwe will see how this works laterbut for now know that creating any block requires a certain amount of computing power and that any other node can check that that power has been spent by whomever created the blocks go back to our previous example of a malicious nodedouble-spending 50 coins by trying to create two separate transactions at the same timeone sending money to b and the other to cafter a broadcasts both transactions to the networkevery node working on creating blockswhich may include apick a number of transactions and order them in whichever way they preferthese nodes will note that two incompatible transactions are part of the same block and will discard onethey are free to pick which one to discardafter placing these transactions in the order they choseeach node starts solving the puzzle of finding a hash for the block that fits the conditions set by the protocolone simple condition could befind a hash for this block with three leading zeroesto iterate over possible solutions for this problemthe block contains a special variable field known as thenonceeach node must iterate as many times as necessary until they find the nonce that creates a block with a hash that fits the conditions set by the protocolthree leading zeroessince each change in the nonce basically results in a random output for a cryptographically secure hash functionfinding the nonce is a game of chance and can only be sped up by increasing computation powereven thena less powerful node might find the right nonce before a more powerful nodedue to the randomness of the problemthis creates an interesting scenario 
because even if a is a malicious node and controls another nodefor instanceany other node on the network still has a chance of finding a different valid blockthis scheme makes it hard for malicious nodes to take control of the networkstillthe case of a big number of malicious nodes colluding and sharing cpu power must be consideredan entity controlling a majority of the nodesin terms of cpu powernot numbercould exercise a double-spending attack by creating blocks faster than other nodesbig enough networks rely on the difficulty of amassing cpu powerwhile in a voting system an attacker need only add nodes to the networkwhich is easyas free access to the network is a design targetin a cpu power based scheme an attacker faces a physical limitationgetting access to more and more powerful hardwaredefinitionat last we can attempt a full definition of what a blockchain is and how it worksa blockchain is a verifiable transaction database carrying an ordered list of all transactions that ever occurredtransactions are stored in blocksblock creation is a purposely computationally intensive taskthe difficulty of creation of a valid block forces anyone to spend a certain amount of workthis ensures malicious users in a big enough network cannot easily outpace honest userseach block in the network points to the previous blockeffectively creating a chainthe longer a block has been in the blockchainthe farther it is from the last blockthe lower the probability it can ever be removed from itthe older the blockthe more secure it isone important detail we left in previous paragraphs is what happens when two different nodes find different but still valid blocks at the same timethis looks like the same problem transactions hadwhich one to pickin contrast with transactionsthe proof-of-work system required for each block lets us find a convenient solutionsince each block requires a certain amount of workit is only natural that the only valid blockchain is the one with most blocks in itthink 
about itif the proof-of-work system works because each block demands a certain amount of workand timethe longest set of valid blocks is the hardest to breakif a malicious node or group of nodes were to attempt to create a different set of valid blocksby always picking the longest blockchainthey would always have to redo a bigger number of blocksbecause each block points to the previous onechanging one block forces a change in all blocks after itthis is also the reason malicious groups of nodes need to control over 50% of the computational power of the network to actually carry out an attackless than thatand the rest of the network will create a longer blockchain fasterblocks that are valid but find their way into shorter forks of the blockchain are discarded if a longer version of the blockchain is computed by other nodesthe transactions in the discarded blocks are sent again to the pool of transactions awaiting inclusion into future blocksthis causes new transactions to remain in an unconfirmed state until they find their way into the longest possible blockchainnodes periodically receive newer versions of the blockchain from other nodesit is entirely possible for the network to be forked if a sufficiently large number of nodes gets disconnected at the same time from another part of the networkif this happenseach fork will continue creating blocks in isolation from the otherif the networks merge again in the futurethe nodes will compare the different versions of the blockchains and pick the longer onethe fork with the greater computational power will always winif the fork were to be sustained for a long enough period of timea big number of transactions would be undone when the merge took placeit is for this reason that forks are problematicforks can also be caused by a change in the protocol or the software running the nodesthese changes can result in nodes invalidating blocks that are considered valid by other nodesthe effect is identical to a network-related 
### Aside: A Perpetual Message System Using Webtasks and Bitcoin

Although we have not delved into the specifics of how Bitcoin or Ethereum handle transactions, there is a certain programmability built into them. Bitcoin allows for certain conditions to be specified in each transaction; if these conditions are met, the transaction can be spent. Ethereum, on the other hand, goes much further: a Turing-complete programming language is built into the system. We will focus on Ethereum in the next post in this series, but for now we will take a look at creative ways in which the concepts of the blockchain can be exploited for more than just sending money. For this, we will develop a simple perpetual message system on top of Bitcoin.

### How will it work?

We have seen that the blockchain stores transactions that can be verified. Each transaction is signed by the one who can perform it and then broadcast to the network, where it is stored inside a block after a proof-of-work is performed. This means that any information embedded in a transaction is stored forever inside the blockchain: the timestamp of the block serves as proof of the message's date, and the proof-of-work process serves as proof of its immutable nature.

Bitcoin uses a scripting system that describes the steps a user must perform to spend money. The most common script simply says 'prove you are the owner of a certain private key by signing this message with it'. This is known as the 'pay to pubkey hash' script. In decompiled form it looks like:

```
<sig> <pubKey> OP_DUP OP_HASH160 <pubKeyHash> OP_EQUALVERIFY OP_CHECKSIG
```

where `<sig>` and `<pubKey>` are provided by the spender, and the rest is specified by the original sender of the money. This is simply a sequence of mixed data and operations; the interpreter for this script is a stack-based virtual machine. The details of execution are out of scope for this article, but you can find a nice summary at the Bitcoin wiki. The important takeaway is that transactions can have data embedded in them in their scripts, and there exists a valid opcode for embedding data inside a transaction: the OP_RETURN opcode. Whatever data follows the OP_RETURN opcode is stored in the transaction. There is a limit to the amount of data allowed: 40 bytes. This is very little, but certain interesting applications can still be built with such a tiny amount of storage. One of them is our perpetual message system.

Another interesting use case is the 'proof of existence' concept: by storing the hash of an asset in the blockchain, it serves as proof of the asset's existence at the point it was added to a block. There already exists such a project, and there is nothing preventing you from using our perpetual message system for a similar purpose. Yet other uses allow the system to prepare transactions that can only be spent after conditions are met, when the spender provides proof of having a certain digital asset, or when a certain minimum number of users agree to spend them. Programmability opens up many possibilities and makes for yet another great benefit of cryptocurrencies in contrast with traditional monetary systems.

### The implementation

Our system will work as an HTTP service. Data will be passed in JSON format as the body of POST requests. The service will have three endpoints, plus one for debugging.

The /new endpoint creates a new user using the username and password passed in. Sample body:

```javascript
{
  "id": "username",
  "password": "password", // the password is not hashed for simplicity; TLS is required
  "testnet": true         // true to use Bitcoin's test network
}
```

The response is of the form:

```javascript
{
  "address": "..." // a Bitcoin address for the user just created
}
```

The /address endpoint returns the address for an existing user. The response is identical to that of the /new endpoint.

The /message endpoint broadcasts a transaction to the Bitcoin network with the message stored in it. A fee is usually required for the network to accept the transaction, though some nodes may accept transactions with no fees. Messages can be at most 33 bytes long.

```javascript
{
  "fee": 667,
  "message": "test"
}
```

The response is either a transaction ID or an error message. Sample of a successful response:

```javascript
{
  "status": "message sent",
  "transactionId": "3818b4f03fbbf091d5b52edd0a58ee1f1834967693f5029e5112d36f5fdbf2f3"
}
```
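To make the 40-byte limit concrete, here is a sketch of how a message payload might be serialized into an OP_RETURN output script. The 0x6a opcode and the single-length-byte push (for payloads up to 75 bytes) are standard Bitcoin script encoding, but this helper is illustrative only and is not part of the service described in this article:

```javascript
// Serialize `message` into the hex of an OP_RETURN output script.
// OP_RETURN is opcode 0x6a; for payloads up to 75 bytes, the push is a
// single length byte followed by the raw data. The 40-byte cap mirrors
// the data limit discussed above.
const OP_RETURN = 0x6a;
const MAX_DATA_BYTES = 40;

function buildOpReturnScript(message) {
  const data = Buffer.from(message, 'utf8');
  if (data.length > MAX_DATA_BYTES) {
    throw new Error(`message too long: ${data.length} > ${MAX_DATA_BYTES} bytes`);
  }
  return Buffer.concat([Buffer.from([OP_RETURN, data.length]), data]).toString('hex');
}
```

For example, `buildOpReturnScript('hi')` yields `6a026869`: the opcode, a length byte of 2, and the two UTF-8 bytes of the message.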
Using the transaction ID, one can see the message stored in it; one can use any publicly available blockchain explorer to do this.

The /debugnew endpoint is similar to the /new endpoint, but allows one to create a user with an existing Bitcoin private key (and address):

```javascript
{
  "id": "username",
  "password": "password",
  "testnet": true,        // true to use Bitcoin's test network
  "privateKeyWIF": "..."  // a private key in WIF format
                          // Note: testnet keys are different from livenet keys,
                          // so the private key must agree with the value of the
                          // "testnet" key in this object
}
```

The response is identical to that of the /new endpoint.

### The code

The most interesting endpoint is the one that builds and broadcasts the transaction: /message. We use the bitcore-lib and bitcore-explorers libraries to do this:

```javascript
getUnspentUtxos(from).then(utxos => {
  let inputTotal = 0;
  utxos.forEach(utxo => {
    inputTotal += parseInt(utxo.satoshis);
  });

  if (inputTotal < req.body.fee) {
    res.status(402).send('Not enough balance in account for fee');
    return;
  }

  const dummyPrivateKey = new bitcore.PrivateKey();
  const dummyAddress = dummyPrivateKey.toAddress();

  const transaction = bitcore.Transaction()
    .from(utxos)
    .to(dummyAddress, 0)
    .change(from)
    .fee(req.body.fee)
    .addData(`${messagePrefix}${req.body.message}`)
    .sign(account.privateKey);

  broadcast(transaction.uncheckedSerialize()).then(body => {
    if (webtaskContext.secrets.debug) {
      res.json({
        status: 'message sent',
        transactionId: body.toString(),
        dummyPrivateKeyWIF: dummyPrivateKey.toWIF()
      });
    } else {
      res.json(body);
    }
  }, error => {
    res.status(500).send(error);
  });
});
```

The code is fairly simple:

1. Get the unspent transactions for an address (i.e., the coins available: the balance).
2. Build a new transaction using the unspent transactions as input.
3. Point the transaction to a new, empty address, assigning 0 coins to that address (do not send money unnecessarily).
4. Set the fee.
5. Set the address where the unspent money will get sent back (the change address).
6. Add our message.
7. Broadcast the transaction.

Bitcoin requires transactions to be constructed using the money from previous transactions. That is, when coins are sent, it is not the origin address that is specified; rather, it is the transactions pointing to that address that are included in a new transaction that points to a different destination address. From these input transactions, the money that is then sent to the destination is subtracted; in our case, we use these transactions to pay for the fee, and everything else gets sent back to our address.
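The input-selection step described above can be sketched independently of bitcore-lib. The utxo shape with a `satoshis` field mirrors what the unspent-output query returns, but the function itself is an illustration, not the article's actual code:

```javascript
// Gather unspent outputs until the fee is covered; whatever remains
// after the fee is the change that flows back to our own address.
function selectInputs(utxos, fee) {
  const inputs = [];
  let total = 0;
  for (const utxo of utxos) {
    inputs.push(utxo);
    total += utxo.satoshis;
    if (total >= fee) {
      return { inputs, change: total - fee };
    }
  }
  throw new Error('Not enough balance in account for fee');
}
```

For a fee of 667 satoshis and two 400-satoshi outputs, both outputs are consumed and 133 satoshis come back as change; if the balance cannot cover the fee, the request fails, just as the 402 response does above.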
### Deploying the example

Thanks to the power of webtasks, deploying and using this code is a piece of cake. First, clone the repository:

```sh
git clone git@github.com:auth0-blog/ethereum-series-bitcoin-perpetual-message-example.git
```

Now make sure you have the webtask command-line tools installed:

```sh
npm install -g wt-cli
```

If you haven't done so, initialize your webtask credentials (this is a one-time process):

```sh
wt init
```

Now deploy the project:

```sh
cd ethereum-series-bitcoin-perpetual-message-example
wt create --name bitcoin-perpetual-message --meta wt-node-dependencies='{"bcryptjs":"2.4.3","bitcore-lib":"0.13.19","bitcore-explorers":"1.0.1"}' app.js
```

Your project is now ready to test. Use curl to try it out:

```sh
curl -X POST https://wt-sebastian_peyrott-auth0_com-0.run.webtask.io/bitcoin-perpetual-message/new \
  -d '{ "id": "username", "password": "password", "testnet": true }' \
  -H "Content-Type: application/json"
```

```
{ "address": "mopyghmw5i7ryiq5pfdrqft4gvbus8g3no" } # this is your Bitcoin address
```

You now have to add some funds to your new Bitcoin address. If you are on Bitcoin's testnet, you can simply use a faucet: faucets are Bitcoin websites that give free coins to addresses, and these are easy to get for the testnet. For the livenet, you need to buy bitcoins using a Bitcoin exchange. Now send a message:

```sh
curl -X POST https://wt-sebastian_peyrott-auth0_com-0.run.webtask.io/bitcoin-perpetual-message/message -d '...'
```

Now you can look at the transaction using a blockchain explorer and the transaction ID. If you go down to the bottom of the page in the link before, you will see our message with the prefix wtmsg. This will get stored in the blockchain forever!

### Try it yourself

The webtask at https://wt-sebastian_peyrott-auth0_com-0.run.webtask.io/bitcoin-perpetual-message/ is live. You will need to create your own account and fund it, though. You can also get the full code for this example and run it.

### Conclusion

Blockchains enable distributed, verified transactions, and at the same time they provide a creative solution to the double-spending problem. This has enabled the rise of cryptocurrencies, of which Bitcoin is the most popular example.
Millions of dollars in bitcoins are traded each day, and the trend shows no signs of slowing down. Bitcoin provides a limited set of operations to customize transactions, yet many creative applications have appeared through the combination of blockchains and computation. Ethereum is the greatest example of these, marrying decentralized transactions with a Turing-complete execution environment. In the next post in the series, we will take a closer look at how Ethereum differs from Bitcoin, and how the concept of decentralized applications was brought to life by it.", "image" : "https://cdn.auth0.com/blog/Ethereum1/logo-2.png", "date" : "March 06, 2017" } , { "title" : "Cloudpets Data Breach Affects Over 820,000 Customers", "description" : "An unsecured database allowed hackers to steal personal information from over 820,000 Cloudpets customers. Learn how this may affect you and what to do next.", "author_name" : "Ado Kukic", "author_avatar" : "https://s.gravatar.com/avatar/99c4080f412ccf46b9b564db7f482907?s=200", "author_url" : "https://twitter.com/kukicado", "tags" : "security", "url" : "/cloudpets-data-breach/", "keyword" : "Spiral Toys is a company that creates toys for children. It has an internet-connected product called CloudPets, which allows parents and children to record and send voice messages to each other through a mobile app. On January 7, hackers discovered that the database the company was using to store data for this product was unsecured. Hackers took control of the database, deleted all information, and demanded a payment to restore the data. The database contained information for 820,000+ users, including emails, bcrypt-hashed passwords, and links to voice recordings customers and their children had made, which could now be publicly accessed. Additional information stored included pictures, names, birthdays, and relationships. Customers were not notified that their data had been compromised. Troy Hunt wrote an excellent article covering this incident.
Some highlights: the database used was publicly accessible and did not even require a password to access. CloudPets was also notified at least four times that their database was exposed, and the reporters never heard back from the company. Finally, staging and test databases were also discovered which held production data that could likewise have been compromised. Aside from the DevOps failure to secure the database properly, password requirements for user accounts were non-existent: although the passwords were stored as bcrypt hashes, Troy was able to use hashcat and find valid passwords such as 'qwe', 'password', and '123456' (source: Troy Hunt). Since the database has been publicly exposed since at least December 25, 2016, it is safe to assume that many malicious parties have accessed and downloaded the data. We urge customers that have CloudPets accounts to change their passwords and monitor their other accounts for signs of malicious activity.

### Personal information security guide

Even if you don't have a CloudPets account, it may be a good time to review our personal information security guide, which has plenty of tips on securing your personal information online, best practices for choosing good passwords, and much more. Top things to remember when it comes to choosing a good password:

- Don't reuse the same password for multiple accounts.
- Combine alphanumeric, special, lower- and uppercase characters.
- Your password should be at least 10 characters long.
- If possible, enable multifactor authentication for your accounts.

### Auth0 can protect your users and apps

Managing identity is a complex and difficult task. At Auth0, our goal is to make identity simple for developers. A recent feature we launched, called breached password detection, can help alert your users that their credentials have been compromised in a data breach when they log in to your app. We are still working on getting and adding credentials from this breach to our database to better protect your users. This feature helps your users stay safe, but it also protects your apps from malicious access.
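The password guidelines listed earlier are easy to check mechanically. A minimal, illustrative validator (not an Auth0 API, and no substitute for multifactor authentication or breach detection) might look like this:

```javascript
// Check the guidelines listed above: at least 10 characters, mixing
// lowercase, uppercase, digits, and special characters.
function meetsGuidelines(password) {
  return password.length >= 10 &&
    /[a-z]/.test(password) &&
    /[A-Z]/.test(password) &&
    /[0-9]/.test(password) &&
    /[^A-Za-z0-9]/.test(password);
}
```

Note that the passwords recovered from the CloudPets dump, such as 'qwe' and '123456', fail every one of these checks.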
Additionally, Auth0 meets the standards for various password strength requirements, provides multifactor authentication, and more. If you want to make identity simple and secure for your applications, give Auth0 a try.", "image" : "https://cdn.auth0.com/blog/this-the-season-for-cyber-criminals/logo.png", "date" : "March 03, 2017" } , { "title" : "Create a Docker dashboard with TypeScript, React and Socket.io", "description" : "Let's create a functioning web-based dashboard for Docker!", "author_name" : "Steve Hobbs", "author_avatar" : "https://en.gravatar.com/userimage/3841188/bc8fc1f1ebb326d59bab456cac894bdf.jpeg", "author_url" : "http://twitter.com/elkdanger", "tags" : "reactjs", "url" : "/docker-dashboard-with-react-typescript-socketio/", "keyword" : "In this article, we are going to use a few different technologies together to build something which, after a bit more elaboration, might actually be useful! We will be creating a web-based dashboard for a Docker installation using a number of different frameworks and technologies, both front-end and server-side, enabling an administrator to monitor running containers, start and stop existing containers, and create new containers based on existing Docker images. There is a wide scope for elaboration here, of course, but I'll leave that as an exercise for you, the reader. Hopefully this article will set you off on the right foot with a good overview of the relevant technologies, enabling you to add even more value to the product.

### The app

This is a quick preview of what the app looks like when it's finished. It's essentially a page that displays two lists of Docker containers: those that are currently running, and those that are stopped. It allows the user to start and stop these containers, as well as start a new container from an existing image by clicking the 'New container' button.

### The code

If you want to explore the finished product as a reference (finished as far as the article is concerned), then you can fork the code on GitHub.

### Technology stack
Let's have a look at exactly what we're going to be using, and why. I'll go through the prerequisites and installation requirements in a bit.

- **Node**: we will use this to write our server-side code in JavaScript, run it on our machine, and serve up our website to our users.
- **Docker**: this uses container technology to reliably run apps and services on a machine. The app interfaces with the Docker daemon through the Docker Remote API (more on this later).
- **TypeScript**: this allows us to add type safety to JavaScript and lets us use modern JavaScript syntax in older browsers.
- **React**: allows us to write the front-end of our application in isolated components in an immutable, state-driven way, mixing HTML with JavaScript.
- **Socket.io**: provides us with a way to communicate in real time with the server and other clients using WebSocket technology, gracefully degrading on older browsers.

Peppered amongst the main technologies mentioned above are various libraries which also provide a lot of value during development time:

- **Express.js**: used to serve our web application.
- **Webpack 2**: to transpile our TypeScript assets into normal JavaScript.
- **Bootstrap**: to provide something decent looking - a problem I know all of us programmers endure!

There are a few more minor ones, but I will cover those as we come to them.

### Prerequisites

#### Docker

As this is going to be a slick-looking dashboard for Docker, we need to make sure we have Docker installed. If you don't already, head to docker.com and download the latest version of the client for your operating system. If you've never heard of or used Docker before, don't worry about it too much, but it might be worth following through their getting-started tutorial for Mac, Windows or Linux. To make sure your Docker installation is up and running, open up a command prompt and type:

```sh
docker -v
```

You should see some version information repeated back to you; mine says Docker version 1.12.5, build 7392c3b. If you can't see this or you get an error, follow through the installation docs again carefully to see if you missed anything. Keep the command prompt open - you're going to need it!

**A note about the Docker Toolbox**: this article was written assuming that you have the Docker native tools installed. If you happen to have the older Docker Toolbox installed, then the Docker API may not work for you straight out of the box. If you're in this situation, you may need to perform some additional steps to enable the API with Docker Toolbox. Many thanks to reader Rick Wolff for pointing this out!

#### Node.js

To write our app and serve the web interface to the user, we're going to use Node.js. This has a number of libraries and frameworks which will make the job very easy for us. Version 6.3.1 was used to build the demo app for this article, so I would urge you to use the same version or later if you can, as there are some language features that I'm using which may not be available in earlier versions of the framework. You can grab the 6.3.1 release from their website, or simply grab the latest release from their main downloads page. You can also use something like nvm if you want to mix and match your versions for different projects, which is something I can recommend doing. Once you have Node installed, open up your command line and make sure it's available by typing:

```sh
node -v
```

It should repeat the correct version number back to you. Also check that npm is available (it should have been installed by the Node.js installer) by typing:

```sh
npm -v
```

It should ideally be version 3 or greater.

#### TypeScript

We will need to install the TypeScript compiler for our application to work. Luckily, we can do this through npm. Now that we have npm installed from the previous step, we can install TypeScript using the following command:

```sh
npm install -g typescript
```

This will download the TypeScript compiler using the Node package manager and make the tools available on the command line. To verify that your installation has worked, type:

```sh
tsc -v
```

which should again echo a version number back to you; I'm using 2.0.10.

#### Webpack 2

Finally, install webpack, which will allow us to package our JavaScript assets together and will effectively run our TypeScript compiler for us.
Again, we can do this through npm:

```sh
npm install -g webpack
```

This installs webpack into the global package repository on our machine, giving us access to the `webpack` tool.

### Setting up the project

First of all, create a folder somewhere on your machine to house the development of your Docker dashboard, and navigate to it in your command line. We'll go through a number of steps to set this folder up for use before we start coding. Next, initialise the Node.js project by typing:

```sh
npm init
```

This will ask you a number of questions about the project, none of which are terribly important for this demo, except that the name must be all lower-case and contain no spaces. Once that has finished, you will be left with a package.json file in your project. This is the manifest file that describes your Node project and all of its dependencies, and we'll be adding to this file shortly.

### Creating the web server

Next, we'll get the basic web server up and running, which will eventually serve our React app to the user. Let's begin by installing Express.js, which will enable us to get this done:

```sh
npm install --save express
```

Express is a framework that provides us with an API for handling incoming HTTP requests and defining their responses. You can apply a number of view engines for serving web pages back to the user, along with a whole host of middleware for serving static files, handling cookies, and much more. Alas, we're simply going to use it to serve up a single HTML file and some JavaScript assets, but at least it makes that job easy! Create the file server.js inside the root of your project, and add the code which will serve the HTML file:

```javascript
let express = require('express');
let path = require('path');
let app = express();
let server = require('http').Server(app);

// Use the environment port if available, or default to 3000
let port = process.env.PORT || 3000;

// Serve static files from /public
app.use(express.static('public'));

// Create an endpoint which just returns the index.html page
app.get('/', (req, res) => res.sendFile(path.join(__dirname, 'index.html')));

// Start the server
server.listen(port, () => console.log(`Server started on port ${port}`));
```

Note: you're going to see a lot of new ES6 syntax in this article, like let, const, arrow functions and a few other things. If you're not aware of modern JavaScript syntax, it's worth having a read-up on some of the new features.

Create an index.html file in the root of the project with the following content:

```html
<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Docker Dashboard</title>
  <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" type="text/css">
</head>
<body>
  <div id="docker dashboard"></div>
  <script src="//code.jquery.com/jquery-2.2.4.js" integrity="sha256-bbhdlvqf/xty9gja0dq3hiwqf8lacrtxxzkrutelt44=" crossorigin="anonymous"></script>
  <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
</body>
</html>
```

This simply gives us a basic template for the front page of our app - we'll be adding to this later. Finally, let's test it out to make sure it's all working so far. On the command line:

```sh
node server.js
```

The prompt should tell you that it has managed to start the site on port 3000. Browse there now and make sure you can see the default index page. If not, check both the browser window and the console to see if Node has spat out any useful errors, and try again!

### Keeping a smooth development workflow

Right now, when you make changes to the site, you will be forced to stop and restart the Node app to see your changes to Node.js code take effect, or re-run the webpack command whenever you make a change to your React components. We can mitigate both of these by causing them to reload themselves whenever changes are made. To automatically reload your Node.js server-side changes, you can use a package called nodemon. If you want to use this package from the command line, you can do:

```sh
npm install -g nodemon
```

This will allow us to run our app in such a way that any changes to the server-side code will cause the web server to restart automatically, by using `nodemon server.js`. We only want to do this on our development machines, though, so we will configure our package.json accordingly. To handle the recompilation of your React components
automatically, webpack has a 'watch' option that will cause it to re-run by itself. To do this, start webpack using `webpack --watch` and notice that your JavaScript bundles will start recompiling automatically whenever you change your React components. To have these two things - nodemon and webpack - running together, you can either start them in two different console windows, or, if you're using OSX or Linux, you can run them from one console using this neat one-liner:

```sh
nodemon server.js & webpack --watch
```

Note: this won't work on Windows systems, but luckily there is a package for that called concurrently that you can use to achieve the same effect:

```sh
npm install -g concurrently
concurrently "nodemon server.js" "webpack --watch"
```

While you can use these tools by installing them globally, for our application we're going to install these two things as development dependencies, and adjust our package.json file with two commands: one to start the app normally, without nodemon, and a development script we can use to start both nodemon and webpack watch. Firstly, install these two packages as development dependencies:

```sh
npm install -D nodemon concurrently
```

Then edit the 'scripts' node of the package.json file to look like the following:

```json
"scripts": {
  "start": "webpack -p && node server.js",
  "start-dev": "./node_modules/.bin/concurrently \"nodemon server.js\" \"webpack --watch\""
}
```

The start script (run using `npm start`) will firstly compile your JavaScript assets using webpack and then run our app using Node. The -p switch causes webpack to automatically optimize and minimize our scripts, ready for production. The start-dev script (run using `npm run start-dev`) is our development mode: it starts our web server using nodemon and webpack in 'watch' mode, meaning that both our server-side and client-side code will be automatically reloaded when something changes. Thanks to @omgimalexis for some suggestions in this area!

### Starting some React and TypeScript

The main body of our client application is going to be constructed using React and TypeScript, which means we need to spend a little more time setting up one or two more tools. Once we set up a workflow for compiling the first component, the rest will easily follow. Let's have a look at how we're going to structure our React components:

```
app/
├── components/
│   ├── App.tsx
│   ├── ContainerList
│   ├── DialogTrigger
│   ├── Modal
│   └── NewContainerModal
└── index.tsx
```

They will all be housed inside an 'app' folder, with the smaller components inside a 'components' subfolder. index.tsx is essentially an entry point into our client-side app: it binds the React components to the HTML DOM. App.tsx glues everything together - it arranges and communicates with the other components in order to present the interface to the user and allow them to interact with the application. Let's set the project up to start compiling index.tsx. Create the 'app' folder, and then the index.tsx file inside of that, with the following content:

```tsx
import * as React from 'react';
import * as ReactDOM from 'react-dom';
import { AppComponent } from './components/App';

ReactDOM.render(<AppComponent />, document.getElementById('docker dashboard'));
```

If you're using the excellent Visual Studio Code, you'll notice that it will immediately start throwing up IntelliSense issues, mainly because it doesn't know what 'react' and our application component are. We're going to use webpack and TypeScript to fix that.

### Setting up webpack

Webpack will take all our .tsx files, work out their dependencies based on the imported files, run them through the TypeScript compiler, and then spit out one JavaScript file that we can include on the main HTML page. It does this primarily by referencing a configuration file in the root of our project, so let's create that next. Create the file webpack.config.js in the root of your project:

```javascript
module.exports = {
  entry: './app/index.tsx',
  output: {
    filename: 'bundle.js',
    path: __dirname + '/public/js'
  },
  devtool: 'source-map',
  resolve: {
    extensions: ['.web.js', '.ts', '.tsx', '.js']
  },
  module: {
    loaders: [
      { test: /\.tsx?$/, loader: 'ts-loader' }
    ]
  }
};
```

There's quite a bit in there, so let's go through it:

- The entry key tells webpack to start processing files using the /app/index.tsx file.
- The output key tells webpack where to put the output files: in the /public/js folder, with the name bundle.js.
- The devtool key, along with the source-map-loader preloader in the module section, tells webpack to generate source maps, which will come in very handy when trying to debug your JavaScript app later.
- The resolve key tells webpack which extensions to pay attention to when resolving modules.
- The loaders section tells webpack what middleware to use when processing modules. Here we tell it that, whenever webpack comes across a file with a .ts or .tsx extension, it should use the ts-loader tool. This is the tool that processes a TypeScript file and turns it into regular JavaScript.

There is a lot more you can do with webpack, including automatically splitting out common modules into a common JS file, or including CSS files along with your JavaScript, but what we have here is sufficient for our requirements. To get this to work, we still need to install the ts-loader and source-map-loader packages:

```sh
npm install --save-dev ts-loader source-map-loader
```

We also need to install the React packages that we need:

```sh
npm install --save-dev react react-dom
```

Next, we need to install TypeScript into the project. We have already installed it globally in the first section of this article, so we can simply link it in:

```sh
npm link typescript
```

TypeScript itself needs a configuration file, which lives in the tsconfig.json file in the root of the project. Create that now, with the following content:

```json
{
  "compilerOptions": {
    "outDir": "./dist/",
    "sourceMap": true,
    "noImplicitAny": true,
    "module": "commonjs",
    "target": "es5",
    "jsx": "react"
  }
}
```

The main parts of this configuration are the module, target and jsx keys, which instruct TypeScript how to output the correct code to load modules in the right way, and also how to deal with the React JSX syntax correctly (covered later). Let's see what state our webpack setup is in at the moment. From the command line, simply type `webpack` to start compilation. It should give you some stats about compile times and sizes, along with a few errors:

```
error TS2307: Cannot find module 'react'.
error TS2602: JSX element implicitly has type 'any' because the global type 'JSX.Element' does not exist.
Module not found: Error: Cannot resolve 'file' or 'directory' ./components/App
  in /Users/stevenhobbs/dev/personal/docker-dashboard/app @ ./app/index.tsx 4:12-39
```

Essentially, it still doesn't compile. Let's fix that now.
### Installing typings for React

Because we've told webpack that we're going to handle the react and react-dom libraries ourselves, we need to tell TypeScript what those things are. We do that using type definition files. As you can see from the GitHub repository, there are thousands of these files, covering most of the JavaScript frameworks you've heard of. This is how we get rich typing, compile-time hints and IntelliSense while writing TypeScript files. Luckily, we can also install them using npm:

```sh
npm install --save-dev @types/react @types/react-dom
```

Now try running webpack again. This time we get just one error, telling us that the './components/App' module is missing. Let's create a skeleton file for now so that we can get it compiling, and inspect the results. Create the file app/components/App.tsx with the following content:

```tsx
import * as React from 'react';

export class AppComponent extends React.Component<{}, {}> {
  render() {
    return <h1>App</h1>;
  }
}
```

At the moment it does nothing except print out a heading, but it should at least compile; we'll flesh this out much more later on. For now, though, you should be able to run the `webpack` command again and have it produce no errors. To inspect what webpack has created for us, find the public/js folder and open bundle.js. You'll see that, while it does look rather obtuse, you should be able to recognise elements of your program in there towards the very bottom, as normal JavaScript that can run in the browser. It's also rather large, as it includes the React libraries, and it will include even more by the time we're finished! The next thing to do is include this file in our HTML page. Open index.html and put a script tag near the bottom, underneath the Bootstrap include:

```html
<!-- Add our bundle here -->
<script src="/js/bundle.js"></script>
```

Now you should be at the point where you can run the site using `node server.js`, browse to http://localhost:3000, and view the running website. If you can see the heading written using a large header font, then you've successfully managed to get your webpack/TypeScript/React workflow working. Congratulations! Now let's flesh out the actual application a bit more and add some real value.
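Before building real components, it can help to see roughly what the JSX in App.tsx compiles down to. The helper below is a toy stand-in for React.createElement, invented for illustration (React's real element objects differ): JSX such as the `<h1>` above becomes an ordinary function call producing a plain object, which is why the `jsx` key in tsconfig.json matters.

```javascript
// Toy model of JSX output: <h1 className="page-header">App</h1>
// compiles to a call like createElement('h1', { className: 'page-header' }, 'App'),
// yielding a plain element object that a renderer later turns into DOM.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

const element = createElement('h1', { className: 'page-header' }, 'App');
```

Here `element.type` is `'h1'` and `element.children` is `['App']`; webpack's ts-loader performs the real version of this transformation for every .tsx file in the bundle.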
more and add some real valuecreating the componentswhat we have now is a server-side application which acts as the backbone of our react appnow that we have done all that setup and configurationwe can actually concentrate on creating the react components that will form the applications interfacelater onwe will tie the interface to the server using socketbut for now lets start with some react componentsto figure out what components we needs take another look at a screenshot of the applicationthis time with the individual react components highlightedthe dialogtrigger component displays a button which can trigger a bootstrap modal dialogthe containeritem component knows how to display a single docker containerincluding some info about the container itselfthe containerlist displays a number of containeritem componentsthere are two containerlist components here - one for running containersand one for stopped containersone additional component which is not shown in that screenshot is the modal dialog for starting new containersto start withs create the component to display a single containercreate a new file in /app/components called containerlistitemand give it the following contentimport * as classnames fromclassnamesexport interface container { idstring namestring imagestring statestring statusstring}export class containerlistitem extends reactcontainer{ // helper method for determining whether the container is running or not isrunning{ return thispropsstate ===running} render{ const panelclass = thisisrunningsuccessdefaultconst classes = classnamespanel`panel-${panelclass}`const buttontext = thisstopreturndiv classname=col-sm-3div classname={ classes }>panel-heading{ thisname }<panel-bodystatus{thisstatus}<br/>imageimage} <panel-footerbutton classname=btn btn-default{buttontext}</button>}}here we have defined a component that can render a single containerwe also declare an interface that has all of the properties about a container that wed want to displaylike its 
nameimage and current statuswe define thetype of this component to be a containerwhich means we can get access to all the container information through thisthe goal of this component is to not only display the current status of the componentbut also to handle the start/stop button - this is something well flesh out later once we get into the socketio goodnessthe other interesting this component can dois slightly alter its appearance depending on whether the container is running or notit has a green header when its runningand a grey header when its notit does this by simply switching the css class depending on the statusll need to install the classnames package for this to workalong with its typescript reference typingsto do thatdrop into the command line once morenpm install --save classnamesnpm install --save-dev @types/classnamesclassnames is not strictly necessarybut does provide a handy api for conditionally concatenating css class names togetheras we are doing heres create the containeritemlist componentwhich is in charge of displaying a wholeof these components togethercreate a new file in /app/components called containerlist with the following contentimport { containercontainerlistitem } from/containerlistitemexport class containerlistprops { containerscontainer[] titlestring}export class containerlist extends reactcontainerlistpropsdiv>h3>title}</h3>p>containerslength == 0no containers to show}</p>rowmapc =>containerlistitem key={cname} {c} />} <}}this one is a little simpler as it doesnt do too much except display a bunch of componentlistitems in athe properties for this component include an array of container objects to displayand a title for theif theof containers is emptywe show a short messageotherwisewe use mapto convert theof container types into containerlistitem componentsusing the spread operatorthec partto apply the properties on container to the componentwe also give it a key so that react can uniquely identify each container in them using the 
The container name will be unique in our domain: you can't create two Docker containers with the same name, running or not.

So now we have a component to render a container, and one to render a list of containers with a title. Let's flesh out the App container a bit more.

### Displaying some containers

Back to app.tsx. First we need to import our new components into the module:

```typescript
import { ContainerList } from './components/ContainerList';
```

We'll create a couple of dummy containers, just for the purpose of displaying something on screen; we'll swap this out later with real data from the Docker remote API. Add this inside the AppComponent class, near the top:

```typescript
containers: Container[] = [
  {
    id: '1',
    name: 'test container',
    image: 'some image',
    state: 'running',
    status: 'running'
  },
  {
    id: '2',
    name: 'another test container',
    image: 'some image',
    state: 'stopped',
    status: 'stopped'
  }
];
```

Now we need to create some state for this application component. The state will simply tell us which containers are running, and which are stopped; we'll use this state to populate the two lists of containers respectively. To this end, create a new class AppState outside of the main application component to hold this state:

```typescript
class AppState {
  containers: Container[];
  stoppedContainers: Container[];
}
```

Now change the type of the state on AppComponent, so that TypeScript knows what properties are available on our state. Your AppComponent declaration should now look like this:

```typescript
export class AppComponent extends React.Component<{}, AppState> {
```

Then create a constructor inside AppComponent to initialise our state, including giving it our mocked-up containers. We use lodash to partition our container list into two lists based on the container state. This means that we'll have to install lodash and the associated typings:

```
npm install --save lodash
npm install --save-dev @types/lodash
```

And then import the lodash library at the top of the file:

```typescript
import * as _ from 'lodash';
```

Lodash is a very handy utility library for performing all sorts of operations on lists, such as sorting, filtering and, in our case, partitioning. Here's the constructor implementation:

```typescript
constructor(props: {}) {
  super(props);
  const partitioned = _.partition(this.containers, (c: Container) => c.state === 'running');
  this.state = {
    containers: partitioned[0],
    stoppedContainers: partitioned[1]
  };
}
```
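If `_.partition` is new to you, a tiny stand-in written in plain JavaScript shows the semantics we are relying on: one list in, two lists out, split by a predicate. The container objects here are made-up samples, not part of the app:

```javascript
// Minimal stand-in for lodash's _.partition: split a list into
// [matching, non-matching] based on a predicate function.
function partition(items, predicate) {
  const pass = [];
  const fail = [];
  for (const item of items) {
    (predicate(item) ? pass : fail).push(item);
  }
  return [pass, fail];
}

const containers = [
  { name: 'web', state: 'running' },
  { name: 'db', state: 'stopped' },
  { name: 'cache', state: 'running' },
];

const [running, stopped] = partition(containers, c => c.state === 'running');
console.log(running.map(c => c.name)); // [ 'web', 'cache' ]
console.log(stopped.map(c => c.name)); // [ 'db' ]
```

In the app itself we use the real lodash implementation, which behaves the same way for this case.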
Now, in our state, we should have two lists of containers: those that are running, and those that aren't. Let's replace the render method so that it takes our dummy containers and uses our components to represent them on the screen:

```typescript
render() {
  return (
    <div className="container">
      <h1 className="page-header">Docker Dashboard</h1>
      <ContainerList title="Running containers" containers={this.state.containers} />
      <ContainerList title="Stopped containers" containers={this.state.stoppedContainers} />
    </div>
  );
}
```

At this point you should have a basic dashboard setup with some dummy containers - let's have a look!

### Making things dynamic

Let's have a look at the Docker and Socket.IO side of things now, and replace those dummy containers with some real data. Install dockerode, a Node.js library that enables us to interact with the Docker remote API:

```
npm install --save dockerode
```

Next, install the libraries and associated typings for Socket.IO - we'll be using this both on the server side and the client, as a means of communicating between the two:

```
npm install --save socket.io socket.io-client
npm install --save-dev @types/socket.io @types/socket.io-client
```

Now, open server.js in the root of the project and import Socket.IO, binding it to the Express server that we've already created:

```javascript
let io = require('socket.io')(server);
```

We can also get a connection to the Docker remote API at this point, through dockerode. We need to connect to the API differently depending on whether we're on a Unix system or a Windows system, so let's house this logic in a new module called dockerapi.js in the root of the project:

```javascript
let Docker = require('dockerode');

let isWindows = process.platform === 'win32';
let options = {};

if (isWindows) {
  options = {
    host: '127.0.0.1',
    port: 2375
  };
} else {
  options = {
    socketPath: '/var/run/docker.sock'
  };
}

module.exports = new Docker(options);
```

Now we can include this in our server.js file and get a handle to the API:

```javascript
let docker = require('./dockerapi');
```

We're going to provide the client with a few methods: getting a list of containers, starting a container, stopping a container, and running a new container from an existing image. Let's start with the container list. We need to listen for connections; we can do this further down the server.js script, after the line that starts the web server:

```javascript
io.on('connection', socket => {
  socket.on('containers.list', () => {
    refreshContainers();
  });
});
```
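The platform switch in dockerapi.js is easy to sanity-check in isolation. Here is the same decision extracted into a plain function; the host, port, and socket path mirror the values used above:

```javascript
// Decide how to reach the Docker remote API for a given platform:
// Windows uses TCP on 127.0.0.1:2375, everything else uses the
// default unix socket at /var/run/docker.sock.
function dockerOptions(platform) {
  if (platform === 'win32') {
    return { host: '127.0.0.1', port: 2375 };
  }
  return { socketPath: '/var/run/docker.sock' };
}

console.log(dockerOptions(process.platform));
```

Keeping this logic in its own module (as dockerapi.js does) means the rest of the server never has to care which transport is in use.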
This starts Socket.IO listening for connections. A connection will be made when the React app starts - at least it will be when we put the code in, a bit later on. In order to send the list of Docker containers, we listen for the `containers.list` message being sent from the socket that has connected to the server; in other words, the client app has requested the list of containers from the server.

Let's go ahead and define the refreshContainers() method:

```javascript
function refreshContainers() {
  docker.listContainers({ all: true }, (err, containers) => {
    io.emit('containers.list', containers);
  });
}
```

Whenever we call refreshContainers(), the Docker API will be used to retrieve the list of all of the containers that exist on the current system, which will then be sent using the `containers.list` message through Socket.IO. Notice, though, that we're sending the message through the main io object rather than through a specific socket - this means that all of the clients currently connected will have their container lists refreshed. You will see why this becomes important later in the article.

Moving over to the main React component, we should now be able to start picking up messages through Socket.IO which indicate that we should display the container list. First, import the Socket.IO client library and connect to the Socket.IO server:

```typescript
import * as io from 'socket.io-client';

let socket = io.connect();
```

Delete the mocked-up containers that we had put in before, then change the constructor so that we react to the messages being passed to us from Socket.IO, instead of using our mocked-up containers. We will also initialise the component state so that the containers are just empty lists; the component will populate them at some short time in the future, when it has received the appropriate messages. Here's what the constructor looks like now:

```typescript
constructor(props: {}) {
  super(props);
  this.state = {
    containers: [],
    stoppedContainers: []
  };

  socket.on('containers.list', (containers: any[]) => {
    const partitioned = _.partition(containers, (c: any) => c.State === 'running');
    this.setState({
      containers: partitioned[0].map(mapContainer),
      stoppedContainers: partitioned[1].map(mapContainer)
    });
  });
}
```

We listen for messages using socket.on(), and specify the message name. When our socket receives a message with this name, our handler function will be called. In this case, we handle it and receive a list of container objects down the wire. We then partition the list into running and stopped containers, just as we did before, and then we set the state appropriately.
Each container from the server is mapped to our client-side Container type using a function, mapContainer(), which is shown here:

```typescript
function mapContainer(container: any): Container {
  return {
    id: container.Id,
    name: _.chain(container.Names)
      .map((n: string) => n.substr(1))
      .first()
      .value(),
    state: container.State,
    status: `${container.State} (${container.Status})`,
    image: container.Image
  };
}
```

This is where we extract out properties such as the name, status, and so on. Any other properties that you want to include on the UI in the future, you will probably read inside this function.

So now we have the ability to react to Socket.IO messages coming down the wire. The next thing to do is cause the server to send us the container list. We do this by sending a `containers.list` message to the server using the socket, which will send all the connections a similarly-titled message back with the container data. We can send this message from the componentDidMount event, which is called on our component once it has been "mounted" into the DOM:

```typescript
componentDidMount() {
  socket.emit('containers.list');
}
```

Right now, you should be able to start your app and have it display a list of the running and stopped Docker containers on your machine.

### Starting containers

Being able to start and stop a container is merely an extension of what we've already accomplished. Let's have a look at how we can start a container when we click the start button.

#### Wiring up the start button

The workflow we're going to implement looks like this:

- We handle the click event of the start button from inside the React component.
- Inside the click event, we send a message to the socket running on the server.
- The server receives the message and tells Docker to start the appropriate container.
- When the container starts, the server dispatches a message to all connections with a refreshed container list.

Let's start with the button. Alter the button inside your ContainerListItem component so that it handles the click event, using a method called onActionButtonClick:

```typescript
<button onClick={this.onActionButtonClick.bind(this)} className="btn btn-default">{buttonText}</button>
```

Next, create the onActionButtonClick handler somewhere inside the same component:

```typescript
private onActionButtonClick(): void {
  socket.emit('container.start', { id: this.props.id });
}
```

Here we post the `container.start` message to the socket, along with the container ID.
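To see the mapping step on its own, here is a plain-JavaScript sketch of the same transformation. The sample payload is invented, but the field names (Id, Names, Image, State, Status) are the shapes the Docker remote API's container listing returns, with each entry in Names prefixed by a slash:

```javascript
// Reshape a Docker remote API container entry into the flat
// client-side Container type used by the React components.
function mapContainer(c) {
  return {
    id: c.Id,
    name: c.Names[0].substr(1), // strip the leading '/' Docker adds
    image: c.Image,
    state: c.State,
    status: `${c.State} (${c.Status})`,
  };
}

// A made-up sample in the shape Docker's /containers/json returns.
const sample = {
  Id: 'abc123',
  Names: ['/my-mongo'],
  Image: 'mongo:latest',
  State: 'running',
  Status: 'Up 2 hours',
};

console.log(mapContainer(sample));
// { id: 'abc123', name: 'my-mongo', image: 'mongo:latest',
//   state: 'running', status: 'running (Up 2 hours)' }
```

Because this function is pure, it is also the natural seam for unit-testing the client without a Docker daemon available.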
Armed with this information, we'll be able to tell Docker which container to start. You might find that you get an issue here, because TypeScript doesn't know what socket is yet. We can fix that by importing socket.io-client and connecting to the server socket. At the top of the file:

```typescript
import * as io from 'socket.io-client';
```

Then:

```typescript
const socket = io.connect();
```

Now everything should be fine. To complete the feature, let's pop over to the server side and handle the incoming message. Open server.js and add the following somewhere inside your socket connection handler, alongside where you handle the `containers.list` message:

```javascript
socket.on('container.start', args => {
  const container = docker.getContainer(args.id);

  if (container) {
    container.start((err, data) => refreshContainers());
  }
});
```

Here we simply get a container from Docker using the ID that we got from the client. If the container is valid, we call start() on it. Once start has completed, we call the refreshContainers() method that we already have; this will cause Socket.IO to send our current list of containers to all the connected clients.

### Stopping containers

The functionality for stopping containers that are running is done in much the same way: we send a message through Socket.IO to the server with a `container.stop` message; the server stops the relevant container and then tells everyone to refresh their container lists.

Once again, let's start on the component side of things. In the previous section, we added a handler for the start/stop button which tells Socket.IO to send a message to start the container. Let's tweak that a bit so that we can use it for stopping containers too; we'll just send the right message depending on whether the container is currently running or not. So this handler now becomes:

```typescript
private onActionButtonClick(): void {
  const evt = this.isRunning() ? 'container.stop' : 'container.start';
  socket.emit(evt, { id: this.props.id });
}
```

Next, we'll handle the message on the server. Add a handler for this alongside the one we added in the previous section for `container.start`. The code looks strikingly similar to the start code, except we stop a container instead of starting it. If you run the app now, you should be able to start and stop your containers!

### Periodically refreshing container state

Before we head into the last section, now would be a good time to add a quick feature that will automatically refresh our container state.
As awesome as our new Docker dashboard is, containers can be started, created, and destroyed from a few different places outside of our app, such as the command line. It would be nice to reflect these changes in our app too. A quick and easy way to achieve this is to simply read the container state every x seconds, then update our clients. We already have most of the tools to do this, so let's implement it.

Back in server.js in the server-side app, add a quick one-liner to send an updated list of Docker containers every 2 seconds. Put this outside of the io.on('connection') block:

```javascript
setInterval(refreshContainers, 2000);
```

Once your app is running, dive into the command line and stop one of your containers using `docker stop <container id or name>`, and you should see the container stop inside your dashboard too. Furthermore, thanks to the power of Socket.IO, you should be able to open your dashboard in multiple browsers and see them all update at the same time. Go ahead and try browsing your dashboard on your mobile device too!

### Starting brand new containers

In this final section, we're going to explore how we can start brand new containers from existing Docker images. This will involve a couple of new React components, a Bootstrap modal popup, and some more interaction with Socket.IO and the Docker API.

Let's create the React components. There are three components involved:

- A modal component, which is a generic component for creating any modal dialog.
- A new container modal, which is based upon the generic modal component, for showing the new container-specific UI as well as handling validation.
- A dialog trigger component, which is used to show a modal dialog component on the screen.

#### Creating a generic modal popup component

Let's start with the generic component, seeing as our modal for creating a new container will be based upon this one. We're making a generic component just as an exercise, to show you how you can extend such a component for multiple uses. For example, later you might go on to create a dialog to accept an image name that will be pulled from the Docker Hub - you could also base that modal upon this generic component.
Create a new file in the components directory called Modal.tsx, and begin by importing the relevant modules. Then define some properties that our modal can accept, so that we can configure how it looks and works:

```typescript
import * as React from 'react';

interface ModalProperties {
  id: string;
  title: string;
  buttonText?: string;
  onButtonClicked?: () => boolean | undefined;
}
```

We must take an id and a title, but we can also accept some text for the button on the dialog, and also a handler for the button click, so that we can define what happens when the user clicks the button. Remember that this component is designed to be used in a generic way - we don't actually know what the behaviour will be yet.

Next, let's define the component itself:

```typescript
export default class Modal extends React.Component<ModalProperties, {}> {
  // Store the HTML element ID of the modal popup
  modalElementId: string;

  constructor(props: ModalProperties) {
    super(props);
    this.modalElementId = `#${this.props.id}`;
  }

  onPrimaryButtonClick() {
    // Delegate to the generic button handler defined by the inheriting component
    if (this.props.onButtonClicked) {
      if (this.props.onButtonClicked() !== false) {
        // Use Bootstrap's jQuery API to hide the popup
        $(this.modalElementId).modal('hide');
      }
    }
  }

  render() {
    return (
      <div className="modal fade" id={this.props.id}>
        <div className="modal-dialog">
          <div className="modal-content">
            <div className="modal-header">
              <button type="button" className="close" data-dismiss="modal" aria-hidden="true">&times;</button>
              <h4 className="modal-title">{this.props.title}</h4>
            </div>
            <div className="modal-body">
              {this.props.children}
            </div>
            <div className="modal-footer">
              <button onClick={this.onPrimaryButtonClick.bind(this)} className="btn btn-primary">
                {this.props.buttonText || 'OK'}
              </button>
            </div>
          </div>
        </div>
      </div>
    );
  }
}
```

The component definition itself is mostly straightforward - we just render out the appropriate Bootstrap markup for modal popups, but we pepper it with values, such as the component title. We also specify the click handler on the button, as well as the button text. If the component doesn't specify what the button text should be, the default value 'OK' is used, via this line:

```typescript
{this.props.buttonText || 'OK'}
```

Most importantly, the component renders this.props.children for the modal body. We'll see why this is important in the next section, but basically it allows us to render other components that are specified as children of this component. More on that later.

Also note the onPrimaryButtonClick handler. When the button is clicked, it delegates control to whatever is using this component.
It also inspects the return value from that call: if false is returned, it doesn't automatically close the dialog. This is useful for later, when we don't want to close the dialog in the event that our input isn't valid.

One last thing before we move on: when this component compiles, you'll probably find that TypeScript complains that it can't find $, which is true, since we haven't imported it. To fix this, we need to simply install the typings for jQuery so that it knows how to resolve that symbol. You will also need to install the types for Twitter Bootstrap, so that it knows what the Bootstrap-specific methods and properties are:

```
npm install --save-dev @types/jquery @types/bootstrap
```

#### Creating the new container dialog

This dialog will be defined by creating a new dialog component and wrapping the content in the generic dialog component that we created in the last section, specifying some things like the title and what happens when the user clicks the button.

Create a new file for the component, called NewContainerModal.tsx, and define our imports:

```typescript
import * as React from 'react';
import * as classNames from 'classnames';
import Modal from './Modal';
```

Note that we're importing our generic modal as Modal, allowing us to make use of it in this new modal component - more on that shortly. Let's define some incoming properties, and some state, for our new component:

```typescript
interface ModalProperties {
  id: string;
  onRunImage?: (name: string) => void;
}

interface ModalState {
  imageName: string;
  isValid: boolean;
}
```

For the properties, we allow an id for the component to be set - this will make sense soon, when we create our last component: the modal dialog trigger. We also take a function that we can call when the name of an image to run has been entered. For the state, we're going to record the name of the image that was entered, and also some basic form validation state, using the isValid flag.

As a reminder, this is what this modal popup is going to look like: it's just one text field and one button. Let's fill out the component and have a look at its render method. Also note the constructor, where we initialise the component state to something default:

```typescript
export class NewContainerDialog extends React.Component<ModalProperties, ModalState> {
  constructor(props: ModalProperties) {
    super(props);
    this.state = {
      imageName: '',
      isValid: false
    };
  }

  render() {
    const inputClass = classNames('form-group', { 'has-error': !this.state.isValid });

    return (
      <Modal id={this.props.id} buttonText="Run" title="Create a new container"
             onButtonClicked={this.runImage.bind(this)}>
        <form className="form-horizontal">
          <div className={inputClass}>
            <label htmlFor="imageName" className="col-sm-3 control-label">Image name</label>
            <div className="col-sm-9">
              <input type="text" className="form-control"
                     onChange={this.onImageNameChange.bind(this)}
                     id="imageName" placeholder="e.g. mongodb:latest" />
            </div>
          </div>
        </form>
      </Modal>
    );
  }
}
```

Hopefully now you can see how the component is constructed using the generic Modal component we created earlier. In this configuration, the Modal component acts as a higher-order component, wrapping other components inside of it, instead of our new component inheriting from it as we might have otherwise done. The rest of the markup is fairly standard Bootstrap markup that defines a form field with a label. Three things to note, however:

- We apply a class to the div that wraps the form elements, derived from our isValid state property. If the form isn't valid, the input box gets a nice red border, and the user can see they've done something wrong.
- We specify a handler for the textbox's onChange event, allowing us to handle and record what the user is typing in.
- We specify a handler for the generic modal's button click - when the user clicks that button, our new component is going to handle the event and do something specific to our needs. We'll come back to this in a minute.

Let's define that change handler now:

```typescript
onImageNameChange(e: React.ChangeEvent<HTMLInputElement>) {
  const name = e.target.value;
  this.setState({
    imageName: name,
    isValid: name.length > 0
  });
}
```

All of the form behaviour is captured here. As the user is typing into the box, we record the input value into the imageName state property, and also determine whether or not it's valid. For now, it's good enough for the image name to have at least one character.

Next, we need to define what happens when the user clicks the button on the modal popup. This is done inside the runImage function:

```typescript
runImage(): boolean {
  if (this.state.isValid && this.props.onRunImage) {
    this.props.onRunImage(this.state.imageName);
  }
  return this.state.isValid;
}
```

This should be fairly straightforward - we simply say that if the state of the component is valid, and the onRunImage handler has been defined, we call it with the name of the image that the user typed in.
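The contract between the dialog and the generic modal can be condensed into a plain-JavaScript sketch: the dialog's handler fires the callback only for valid input, and its return value tells the modal whether to close itself. The function and sample values here are illustrative, mirroring the component logic rather than being part of the app:

```javascript
// Mirrors runImage(): invoke the callback only when input is valid,
// and report validity back so the modal knows whether to close.
function shouldCloseDialog(state, onRunImage) {
  if (state.isValid && onRunImage) {
    onRunImage(state.imageName);
  }
  return state.isValid;
}

const started = [];
const close = shouldCloseDialog(
  { imageName: 'mongodb:latest', isValid: true },
  name => started.push(name)
);
console.log(close, started); // true [ 'mongodb:latest' ]

// Invalid input: the callback never fires and the dialog stays open.
const closeInvalid = shouldCloseDialog({ imageName: '', isValid: false }, null);
console.log(closeInvalid); // false
```

This is why the generic Modal inspects the handler's return value before hiding itself: returning false keeps the dialog on screen for the user to fix their input.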
We also return a value which indicates to the generic Modal component whether it should close itself; this happens to be the same as the value of the isValid flag. That's it for this component - let's create a trigger component so that we can open it.

#### Triggering the modal

This last component is going to represent the trigger - the thing the user will click on - that opens a modal popup. Its definition is actually very simple. Create a new component called DialogTrigger.tsx, and populate it with the following:

```typescript
import * as React from 'react';

export interface DialogTriggerProperties {
  id: string;
  buttonText: string;
}

export class DialogTrigger extends React.Component<DialogTriggerProperties, {}> {
  render() {
    const href = `#${this.props.id}`;

    return (
      <a className="btn btn-primary" data-toggle="modal" href={href}>
        {this.props.buttonText}
      </a>
    );
  }
}
```

For the component properties, we take the id of the modal we want to trigger, and also the text that we want to show on the button. Then, inside the render function, a standard Bootstrap link is displayed with button styling and the id of the modal to open. If you're not familiar with Bootstrap, note that the actual opening of the dialog is all done by the Bootstrap JavaScript library - all we need to do is specify the data-toggle="modal" attribute and set the href attribute to the id of the modal we want to open.

#### Tying it all together

Now that we have all of our modal components, we can put them all together. Head back to app.tsx and import the components we just created:

```typescript
import { NewContainerDialog } from './components/NewContainerModal';
import { DialogTrigger } from './components/DialogTrigger';
```

There's no need to import the generic Modal component, as that will be done by the NewContainerDialog component; we're not going to use it directly here. Update the render function so that it contains our new components. Place the trigger under the header; the dialog just needs to go on the page somewhere - Bootstrap will place it correctly once it has been opened:

```typescript
<DialogTrigger id="newContainerModal" buttonText="New container" />
<NewContainerDialog id="newContainerModal" onRunImage={this.onRunImage.bind(this)} />
```

Note how the id property of DialogTrigger is the same as the id property of NewContainerDialog.
This is necessary in order for the trigger to know which dialog it needs to open. Also note how the onRunImage property of the dialog component is defined - let's create that now:

```typescript
onRunImage(name: string) {
  socket.emit('image.run', { name: name });
}
```

It just sends the name of the image to the server, inside a message called `image.run`. We can handle that now by heading over to server.js and adding a new handler alongside where we've created the others:

```javascript
socket.on('image.run', args => {
  docker.createContainer({ Image: args.name }, (err, container) => {
    if (!err) {
      container.start(err => {
        if (err) {
          socket.emit('image.error', { message: err });
        }
      });
    } else {
      socket.emit('image.error', { message: err });
    }
  });
});
```

Here we call out to the Docker API and its convenient createContainer method, passing in the image name that the user typed in. Note that this will not pull new images from the Docker Hub - it will only start new containers from existing images that exist on the local system. It can certainly be done, but I'll leave it as an exercise for you to complete in your own time!

If we're able to create the container, we'll start it. Remember the timer that we created earlier? Once the container starts, that timer will pick up the new container and display it to all the clients that are connected. If there is an error, we can send an `image.error` message back to the socket that sent the original request, which will be useful for the user, so that they are aware that something didn't work as expected.

Let's head back to the App component for the final piece of the puzzle. Inside the constructor of the app.tsx component:

```typescript
socket.on('image.error', (err: any) => {
  alert(JSON.stringify(err.message));
});
```

Here we simply throw up an alert if Docker encounters an error running the image. Armed with your new-found React knowledge, I'm sure you can come up with some fancy UI to make this a lot prettier!

### Wrapping up

By now you should have a useful, if somewhat basic, Docker dashboard, and hopefully the journey has been worth it. With all the Socket.IO work in place, be sure to play around with loading your app from multiple sources, like your desktop browser and mobile phone, and watch them all keep in sync. Some things you could continue on with to make it a lot more useful include:

- Using the Docker API to pull images instead of simply running them.
- Using the Docker API to stream the container logs to the client through Socket.IO.
- Extending the container dialog form to include options for port mappings, volumes, container name, and more.
", "image" : "https://cdn.auth0.com/blog/docker/logo.png", "date" : "March 02, 2017" } , { "title" : "The Real Story Behind ECMAScript 4", "description" : "We take a look at what really happened during the ECMAScript 4 era", "author_name" : "Sebastián Peyrott", "author_avatar" : "https://en.gravatar.com/userimage/92476393/001c9ddc5ceb9829b6aaf24f5d28502a.png?size=200", "author_url" : "https://twitter.com/speyrott?lang=en", "tags" : "javascript", "url" : "/the-real-story-behind-es4/", "keyword" : "
Our JavaScript history article sparked interesting comments regarding what really happened during the ECMAScript 4 era. Below you will find a more detailed perspective of what really went down between 1999 and 2008 in the world of JavaScript. Read on!

"A deeper look into what really went on with ECMAScript 4" - tweet this

### A short recap

As we explained in detail in our JavaScript history piece, JavaScript was originally conceived as a "glue" programming language for designers and amateur programmers. It was meant to be a simple scripting language for the web: one that could be used for animations, preliminary form checks, and dynamic pages. Time showed, however, that people wanted to do much more with it. A year after its release in 1995, Netscape took JavaScript to ECMA, a standards organization, to create a standard for JavaScript. In a way, this was a two-sided effort: on one hand, it was an attempt to keep implementors in check (i.e. keeping implementations compatible), and it was also a way for other players to be part of the development process without leaving room for classic "embrace, extend, extinguish" schemes.

A major milestone was finally reached in 1999, when ECMAScript 3 was released. This was the year of Netscape Navigator 6 and Internet Explorer 5. Ajax was just about to be embraced by the web development community. Although dynamic web pages were already possible through hidden forms and inner frames.
Ajax brought a revolution in web functionality, and JavaScript was at its center. To better understand what happened after 1999, we need to take a look at three key players: Netscape, Microsoft, and Macromedia. Of these three, only Netscape and Microsoft were part of TC-39, the ECMAScript committee, in 1999.

### Netscape

In 1999, some members of TC-39 were already working on ideas for what could be ECMAScript 4. In particular, Waldemar Horwat at Netscape had begun documenting a series of ideas and proposals for the future evolution of ECMAScript. The earliest draft, dated February 1999, can be found in the Wayback Machine. An interesting look at the ideas for the next version of ECMAScript is outlined in the motivation section:

> JavaScript is not currently a general-purpose programming language. Its strengths are its quick execution from source (thus enabling it to be distributed in web pages in source form), its dynamism, and its interfaces to Java and other environments. JavaScript 2.0 is intended to improve upon these strengths, while adding others such as the abilities to reliably compose JavaScript programs out of components and libraries and to write object-oriented programs.
> - Waldemar Horwat's early JavaScript 2.0 proposal

However, Netscape would not be the first with a public implementation of these ideas; it would be Microsoft.

### Microsoft

In 1999, Microsoft was focused on a revolution of its own: .NET. It was around the release of Internet Explorer 5 and ECMAScript 3 that Microsoft was getting ready to release the first version of the .NET framework: a full development platform built around a new set of libraries and a common language runtime, capable of providing a convenient execution environment for many different languages. The first exponents of .NET were C# and Visual Basic: the first, an entirely new language inspired by Java and C++; the second, an evolution of its popular Visual Basic language, targeting the new platform. The .NET framework included support for Microsoft's server-side programming framework, ASP.NET.
It was perhaps natural that ASP.NET should provide a JavaScript-like language as part of its tools, and an implementation of a dynamic language such as JavaScript could very well serve as a natural demonstration of the capabilities of the common language runtime. Thus JScript .NET was born.

JScript .NET was introduced in 2000. It was slated as an evolution of JScript (the client-side scripting engine used by Internet Explorer) with a focus on performance and server-side uses: a natural fit for the .NET architecture and the ASP.NET platform. It would also serve to displace VBScript, another scripting language developed by Microsoft in the '90s with heavy inspiration from Visual Basic, normally used for server-side and desktop scripting tasks.

One of the design objectives of JScript .NET was to remain largely compatible with existing JScript code; in other words, mostly compatible with ECMAScript 3. There were implementation differences between JScript and Netscape's JavaScript; however, it was Microsoft's stated objective to follow the standard. It was also one of the objectives of ECMAScript 4 to remain compatible with previous versions of the standard, in the sense that ECMAScript 3 code should run on ECMAScript 4 interpreters. ECMAScript 4 was, thus, a convenient evolution path for JScript.

Released from the constraints of browser development, the team behind JScript .NET could work faster and iterate at their discretion. JScript .NET became, much like Macromedia's ActionScript, another experimental implementation of many of the ideas behind ECMAScript 4. In the words of the JScript .NET team:

> All the new features have been designed in conjunction with other ECMA members. It's important to note that the language features in the JScript .NET PDC release are not final. We're working with other ECMA members to finalize the design as soon as possible. In fact, there's an ECMA meeting this week at the PDC where we'll try to sort out some of the remaining issues.
> - Introducing JScript .NET

In contrast with the first versions of ActionScript, the first releases of JScript .NET in 2000 already included much more functionality from ECMAScript 4.
Classes, optional typing, packages, and access modifiers were some of its new features.

### Macromedia

As the internet was becoming popular, spearheaded by Netscape and its Communicator suite, a different but no less important battle was taking place. Vector animation companies FutureWave Software and Macromedia had developed, by 1995, two of the leading animation platforms: Macromedia Shockwave and FutureWave FutureSplash. From the beginning, Macromedia saw the importance of taking its product to the web, so with help from Netscape it integrated its Shockwave player into Netscape Navigator as a plugin. Much of the work required for having "external components" in the browser had already been done for Java, so the needed infrastructure was in place. In November 1996, Macromedia acquired FutureSplash and promptly renamed it Flash. This made Macromedia the sole owner of the two most important vector-based animation tools for the web: Shockwave and Flash.

For a time, both players and authoring tools coexisted, but after a few years Flash emerged as the winner. The combined power of the web platform, getting bigger and bigger by the day, and the push from content creators caused Flash to evolve rapidly. The next big step for animation software was to become a platform for interactive applications, much like Java offered at the moment, but catering to designers and with a special focus on animation performance and authoring tools.

The power of a certain programmability first came to Flash in version 2 (1997), with "actions". Actions were simple operations that could be triggered by user interaction. These operations did not resemble a full programming language; rather, they were limited to simple, "goto-style" operations in the animation timeline. By version 4 (1999), actions had pretty much evolved into a programming language. Loops, conditionals, variables, and other typical language constructs were available. The limitations of the language were becoming apparent, and Macromedia was in need of something more mature.
As it turns out, browsers already had one such language: JavaScript. Catering to non-programmers, and already with considerable mindshare, JavaScript was a sound choice. Flash Player version 5 (2000) drew heavily from ECMAScript 3 for its scripting language, combined with some of the constructs used in previous versions. This new language expanded actions with many tools from ECMAScript, such as its prototype-based object model and weakly typed variables. Many keywords, such as var, were also shared. This new language was called ActionScript.

By this year, Macromedia was committed to improving ECMAScript. The synergy between JavaScript in the browser and ActionScript in Flash was just what Macromedia needed: Macromedia got a powerful programming language, and at the same time tapped into the mindshare of the already-existing, designer-oriented JavaScript community. It would be in Macromedia's best interest to see ECMAScript succeed.

### Flash as a platform

The power of vector-based animations, a convenient editor, and a powerful programming language proved to be a killer combination. Not only were more and more end users installing the Flash player plugin, but content creators were also producing ever more complex content. Flash was quickly becoming a tool for more than just animations: it was becoming a platform to deliver rich content, with complex business logic behind it.

In a sense, Macromedia had a big advantage compared to the rest of the web: it was the sole owner and developer of the platform. That meant it could iterate and improve on it at a much quicker pace than the web itself. Not only that, it also had the best authoring tools for visual and interactive content, and the mindshare of developers and designers dedicated to this type of content. All of this put Macromedia ahead of other players, even Sun and its Java language (with regards to Java applets in the browser).

The next natural step for Macromedia was to move forward. It had the best authoring tools and it was gaining developer mindshare.
It was only logical to keep investing in and advancing the development of its tools, and one of these tools was ActionScript. Macromedia looked favorably on the ideas Netscape was putting forth in its ECMAScript 4 proposals document, and so began adopting many of them for its own language. At the same time, Macromedia knew it was in its best interest not to stray too far from the general community of JavaScript developers, so it made a good effort to first become compliant with the ECMAScript 3 standard. It could only do them good: ECMAScript 4 was slated as the improvement JavaScript needed for bigger programs, and their community would certainly make use of that. Also, by leading the charge, they could have more leverage in the committee to push forward features that worked, or even new ideas. It was a sound plan.

Although interest in ECMAScript 4 eventually dwindled inside the committee, by 2003 Macromedia was ready to release its new version of ActionScript as part of Flash 7. ActionScript 2.0 brought compile-time type checks and classes (two slated features found in the ECMAScript 4 drafts) and improved compliance with ECMAScript 3.

### The years of silence

TC-39 and ECMA were active between 1999 and 2003. Horwat's latest draft document is dated August 11, 2003. Macromedia and Microsoft continued independently, based largely on this draft document, but no interoperability tests were performed at this stage. By 2003, work by the committee had all but stopped. This meant there was no real push for a new release of the ECMAScript standard. Although Macromedia was about to release ActionScript 2.0 in 2003, and Microsoft's .NET platform was flourishing, ECMAScript 4 was not moving forward.

It is important to note that, at this stage, the drafts published by Horwat were not exhaustive enough to ensure compatibility between implementations. Although ActionScript and JScript .NET were loosely based on the same drafts, they were not really compatible. Worse, code was already being developed using these implementations: code that could potentially become incompatible with the standard in the future.
was not seen as a big problem, as ActionScript and JScript .NET were mostly isolated in their own platforms. Browser engines, for their part, had not advanced as much. Some extensions implemented by the big browsers, Internet Explorer and Netscape, were in use, but nothing big enough. Two long years passed between when work halted in 2003 and when it was resumed. In between, several significant events took place: Internet Explorer, the free browser bundled with Windows by Microsoft, succeeded in crushing Netscape out of the browser market; Firefox was released by Mozilla in 2004; a new standard integrating XML processing into JavaScript was released in 2004, ECMAScript for XML (E4X, ECMA-357), which gained little traction outside certain browser implementations; and Macromedia was acquired by Adobe in 2005. Although it may seem these events are not related, they all played a part in the reactivation of TC-39. TC-39 comes back to life: the success of Internet Explorer on the desktop, due in great part to its bundling with Windows, forced Netscape's hand. In 1998, they released Netscape Communicator's source code and started the Mozilla project. Microsoft had the majority of the browser market share, and AOL, then owner of Netscape, announced major layoffs; the Mozilla project was to be spun off as an independent entity, the Mozilla Foundation. Mozilla would continue the development of Gecko, Netscape Navigator's layout engine, and SpiderMonkey. Soon, the Mozilla Foundation would shift its focus to the development of Firefox, a standalone browser without the bloat of the whole suite of applications that came bundled since Netscape's days. In an unexpected turn of events, Firefox's market share commenced to grow. Microsoft, by the time Firefox was released, had mostly stagnated with regards to web development. Internet Explorer was the king, and .NET was making big inroads in the server market. JScript .NET, getting little traction and developer interest, was left mostly unchanged. Microsoft had no particular interest at this point in reviving ECMAScript: they controlled the browser, and JScript .NET was
an afterthought. It would require some prodding to wake them up. Macromedia, and then Adobe after its acquisition, started a push toward integration of their internal ActionScript work into ECMA in 2003. They had spent a considerable amount of technical effort on ActionScript, and it would only be in their best interest to see that work integrated into ECMAScript. They had the users, the implementation, and the experience to use as leverage inside the committee. At this point, Brendan Eich, now part of Mozilla, was concerned about Microsoft's stagnation with regards to web technologies. He knew web development was based on consensus, and at the moment the biggest player was Microsoft; he needed their involvement if things were to move forward. Taking notice of Macromedia's renewed interest in restarting the work on ECMAScript 4, he realized now was a good time to get the ball rolling. There was interest in the community in standardizing a set of extensions to ECMAScript 3 meant to make it easier to manipulate XML data. A prototype of this had been developed by BEA Systems in 2002 and integrated into Mozilla Rhino, an alternative JavaScript engine written in Java. BEA Systems took their extension to ECMA, and E4X was born in 2004. As E4X was an ECMA standard concerning ECMAScript, it was a good way to get the key players from TC-39 working back together. Eich used this opportunity to jump-start ECMAScript development again by pushing for a second E4X release in 2005. By the end of 2005, TC-39 was back at work on ECMAScript 4. Although E4X was unrelated to the ECMAScript 4 proposal, it brought important ideas that would end up being used, namely namespaces and an accompanying operator. Macromedia, now Adobe, took the work of TC-39 as a clear indication ActionScript was a safe bet. As work progressed, Adobe continued internal development of ActionScript at a fast pace, implementing many of the ideas discussed by the committee in short time. In 2006, Flash 9 was released, and with it ActionScript 3 was also out the door. The list of features integrated in it was extensive: on top of ActionScript 2,
classes were added, plus byte arrays, maps, compile-time and runtime type checking, packages, namespaces, regular expressions, events, proxies, and iterators. Adobe decided to take one more step to make sure things moved forward: in November 2006, Tamarin, Adobe's in-house ActionScript 3.0 engine used in Flash 9, was released as open source and donated to the Mozilla Foundation. This was a clear indication that Adobe wanted ECMAScript to succeed and, if at all possible, to be as little different from ActionScript as possible. A colorful fact of history is that Macromedia wanted to integrate Sun's J2ME JVM into Flash for ActionScript 3; the internal name for this project was Maelstrom. For legal and strategic reasons this plan never came to fruition, and Tamarin was born instead. The fallout: work on ECMAScript was progressing, and a draft design document with an outline of the expected features of ECMAScript 4 was released. The list of features had become quite long. By 2007, TC-39 was composed of more players than at the beginning; of particular importance were newcomers Yahoo and Opera. Microsoft, for their own part, were not sold on the idea of ECMAScript 4. Allen Wirfs-Brock, Microsoft's representative at TC-39, viewed the language as too complex for its own good. His reasons were strictly technical, though internally Microsoft also had strategic concerns. Internal discussions at Microsoft eventually converged on the idea that ECMAScript 4 should take a different course. Another member of the committee, Douglas Crockford from Yahoo, also had his concerns about ECMAScript 4, although perhaps for different technical reasons. He had not been too vocal about them; Wirfs-Brock realized this and convinced Crockford it would be a good idea to voice his concerns. This created an impasse in the committee, which was now not in consensus. In Crockford's words: some of the people at Microsoft wanted to play hardball on this thing. They wanted to start setting up paper trails, beginning grievance procedures, wanting to do these extra legal things. I didn't want any part of that. My disagreement with ES4 was
strictly technical, and I wanted to keep it strictly technical. I didn't want to make it nastier than it had to be. I just wanted to try to figure out what the right thing to do was. So I managed to moderate it a little bit, but Microsoft still took an extreme position, saying that they refused to accept any part of ES4. So the thing got polarized, but I think it was polarized as a consequence of the ES4 team refusing to consider any other opinions. At that moment the committee was not in consensus, which was a bad thing, because a standards group needs to be in consensus. A standard should not be controversial. - Douglas Crockford — The State and Future of JavaScript. Wirfs-Brock put forth the idea of somehow meeting in the middle: the committee decided to split into two work teams, one focused on finding a subset of ECMAScript 4 that was still useful but much easier to implement, and another focused on moving forward with ECMAScript 4. Wirfs-Brock became the editor of the smaller, more focused standard, tentatively called ECMAScript 3.1. It is important to note that members from both teams worked in both groups, so they were not really separate in this sense. As time passed, it became clear ECMAScript 4 was too big for its own weight. The group did not advance as much as they had hoped, and by 2008 many problems still had to be solved before a new standard could be drafted. The ECMAScript 3.1 team, however, had made considerable progress. ECMAScript 4 is dead, long live ECMAScript: a meeting in Oslo, Norway, had been planned for the committee to establish a way forward. Before this meeting took place, Adobe, off the record, had made it clear they were planning to withdraw from ECMAScript 4 development, joining Microsoft and Yahoo in their stance. This was perhaps the result of seeing ECMAScript 4 become too different from ActionScript 3. The iconic meeting took place in 2008. In it, the committee made the hard decision: ECMAScript 4 was dead, a new version of ECMAScript was to be expected, and a change in direction for future work was drafted. Brendan Eich broke the official news in an iconic email. The conclusions of this meeting were to: focus work on ES3.1 with full collaboration from all parties, and target two interoperable implementations by early the next year; collaborate on the next step beyond ES3, which would include syntactic extensions but be more modest than ES4 in both semantic and syntactic innovation; acknowledge that some ES4 proposals had been deemed unsound for the web and were off the table for good, namely namespaces and early binding (this conclusion is key to Harmony); and rephrase other goals and ideas from ES4 to keep consensus in the committee, including a notion of classes based on existing ES3 concepts combined with proposed ES3.1 extensions. ECMAScript 3.1 was soon renamed to ECMAScript 5 to make it clear it was the way forward and that version 4 was not to be expected. Version 5 was finally released in 2009; all major browsers, including Internet Explorer, were fully compliant by 2012. Of particular interest is the word "Harmony" in Eich's e-mail: it was the designated name for the new ECMAScript development process, to be adopted from ECMAScript 6 (later renamed ECMAScript 2015) onwards. Harmony would make it possible to develop complex features without falling into the same traps ECMAScript 4 experienced. Some of the ideas in ECMAScript 4 were recycled in Harmony: ECMAScript 6/2015 finally brought many of the big ideas from ECMAScript 4 to ECMAScript; others were completely scrapped. Wait, what happened to ActionScript? Unfortunately for Adobe, the death of ECMAScript 4 and the decision to stop supporting it meant the large body of work they had performed to keep in sync with the ECMAScript 4 proposal was, at least in part, useless. Of course, they had a useful, powerful, and tested language in the form of ActionScript 3, and the community of developers was quite strong as well, but it is hard to argue against the idea that they bet on ECMAScript 4's success and lost. Tamarin, which was open-sourced to help adoption and progress of the new standard, was largely ignored by browsers. Mozilla initially attempted to
merge it with SpiderMonkey, but they later realized performance suffered considerably for certain important use cases. Work was needed, and ECMAScript 4 was not complete, so it never got merged. Microsoft continued improving JScript; Opera and Google worked on their own clean-room implementations. An interesting take on the matter was exposed by Mike Chambers, an Adobe employee: ActionScript 3 is not going away, and we are not removing anything from it based on the recent decisions. We will continue to track the ECMAScript specifications, but as we always have, we will innovate and push the web forward when possible, just as we have done in the past. - Mike Chambers' blog. It was the hope of ActionScript developers that innovation in ActionScript would drive features in ECMAScript. Unfortunately, this was never the case, and what later came to ECMAScript 2015 was in many ways incompatible with ActionScript. JScript .NET, on the other hand, had been largely left untouched since the early 2000s. Microsoft had long realized developer uptake was just too low. It went into maintenance mode in .NET 2.0 (2005), and remains available only as a legacy product inside the latest versions of .NET; it does not support features added to .NET after version 1, such as generics, delegates, etc. An ECMAScript timeline. Aside: JavaScript use at Auth0. At Auth0 we are heavy users of JavaScript. From our Lock library to our back end, JavaScript powers the core of our operations. We find its asynchronous nature and the low entry barrier for new developers essential to our success. We are eager to see where the language is headed and the impact it will have on its ecosystem. Sign up for a free Auth0 account and take a firsthand look at a production-ready ecosystem written in JavaScript. And don't worry, we have client libraries for all popular frameworks and platforms. Conclusion: JavaScript has a bumpy history. The era of ECMAScript 4 development (1999-2008) is of particular value to language designers and technical committees. It serves as a clear example of how aiming for a release too big for its own weight can result in development hell and stagnation. It is also a stark reminder that even when you have an implementation and are at the forefront of development, things can go in a completely different direction. Being cutting-edge is always a bet. The new process established by the Harmony proposal has started to show progress, and where ECMAScript 4 failed in the past, the newer ECMAScript has succeeded. Progress cannot be stopped when it comes to the web. Exciting years are ahead, and they cannot come soon enough.", "image" : "https://cdn.auth0.com/blog/es6rundown/logo.png", "date" : "March 01, 2017" } , { "title" : "What Cloudbleed Means for You and Your Customers", "description" : "Tavis Ormandy, a vulnerability researcher at Google, discovered that Cloudflare was accidentally leaking sensitive data including passwords, private messages, and more. Learn how this may affect you and your customers and what to do next.", "author_name" : "Diego Poza", "author_avatar" : "https://avatars3.githubusercontent.com/u/604869?v=3&s=200", "author_url" : "https://twitter.com/diegopoza", "tags" : "security", "url" : "/what-cloudbleed-means-for-you-and-your-customers/", "keyword" : "On February 17th, Tavis Ormandy, a vulnerability researcher at Google, sent the tweet that kicked off 2017's biggest security story yet: could someone from Cloudflare security urgently contact me. That's not something anyone wants to read on a Friday afternoon, but especially not when it's coming from one of the world's top infosec researchers. The problem: while doing some routine bug checking, Ormandy saw some data that did not match at all what he expected to find. It's not unusual to find garbage, corrupt data, mislabeled data or just crazy non-conforming data, but the format of the data this time was confusing enough that I spent some time trying to debug what had gone wrong, wondering if it was a bug in my code, he wrote. In fact, the data was bizarre enough that some colleagues around the Project Zero office even got intrigued. The rest of the
Google Project Zero team, a team of security researchers employed specifically to find new zero-day exploits, probably expected Google's announcement of the first practical SHA-1 collision to be the big news of the week. As they looked closer, they found “encryption keys, cookies, passwords, chunks of POST data and even HTTPS requests” for all sorts of different sites using Cloudflare—private messages, reservations, plaintext API requests from a password manager, hotel bookings. How did all of this happen? Cloudflare was dumping memory across the web. It begins with a simple bug in the code of Cloudflare's highly popular reverse proxy service. HTTP requests to certain types of sites signed up with Cloudflare would trigger the bug, which would then leak data from random Cloudflare customers' sites, those that happened to be in memory at the time. A simple depiction of how a reverse proxy like Cloudflare's works: since reverse proxies are shared between customers, data from all was at risk of being disclosed. Since September, random bits of data have been coming out of uninitialized memory and leaking across random Cloudflare customers' sites. In other words, if you went to visit a site that used Cloudflare, you could have had other people's sensitive information “leaking” into your own browsing session. When search engines “crawled” these pages to index them, the same kind of information “leaked”, because search engines cache the output they receive when they visit a page. All these leaked tokens, secrets, and messages were indexed, split up and spread across millions of pages of search results (source: Wordfence). While Cloudflare is working with the major search engines to purge their caches of sensitive information now, it's hard to avoid thinking about the paranoid version of events. Paranoid version: state actors, or state-sponsored actors with significant resources, discovered this vulnerability before Google did. They found a way to send manipulated HTTP requests to Cloudflare sites that would output a predictable stream of user data—and have either packaged everything they found for resale, or set about figuring out how to crack what could be billions of passwords, credit card numbers, and secret tokens. Cloudflare claims they can rule out this scenario with access logs, but whether or not that's true has been subject to some debate. What you should do now: the way that data was disclosed means that any site using Cloudflare could potentially have had its secrets and tokens compromised. Patreon, Yelp, Uber, Medium, Fitbit, and OkCupid all use Cloudflare; for a full list of sites you can click here. Several tools have also been built to assess your risk, like this script that crawls your Chrome history. Though many, many sites use Cloudflare, at least to some degree, it's still quite unlikely that your personal information has been exposed. Many of the individual sites mentioned have also come out and written posts to explain whether you should be concerned—perhaps most notably 1Password, which was initially mentioned in Tavis's tweets as a potentially compromised site. The first thing you want to do to protect yourself from this and future incidents is to enable multifactor authentication. The most important sites to enable MFA on are the ones you use as an SSO IdP—if you're logging into 20 different sites using your Google credentials, then you definitely want to enable some form of MFA there. While it's unlikely your passwords were exposed, this is as good a reminder as any that you should be using strong, unique passwords and organizing them using a password manager. Resetting individual passwords for sites doesn't make much sense—the list of compromised sites is simply so vast, the only solution is to do a mass reset and make sure you're using proper password hygiene going forward. What you should do for your customers: as a site operator, you're probably wondering (1) if you're at risk, and (2) if any of your users' data was exposed in this incident. Cloudflare has already reached out to domain owners that they have proactively identified as being at risk in this leak. However, the absence of notification
does not mean that your domain, and customers, are safe. First off, enable multifactor authentication if you haven't already; it's the easiest way to get a huge boost to the security of your site. Many site operators have begun forcing password resets for all of their users. If you run a consumer site, it may not be worth the trouble and inconvenience; if your customers are in the enterprise, then it is probably a very good idea to force a change. If you're not going to make your users change their passwords (and from a cursory browse of the web this Monday morning, it appears that this is mostly the case), then you should at least recommend it. As far as data invalidation, any secrets which you can easily rotate—session identifiers, tokens, keys—should immediately be changed. Customer SSL keys do not appear to have been compromised in this incident, but it would be prudent to change as much data as you can which may have passed through Cloudflare. If you or any of your customers are regulated by the Health Insurance Portability and Accountability Act (HIPAA), then you'll definitely want to have your security/compliance teams get in touch with your lawyers and discuss whether this is a breach that needs to be reported.", "image" : "https://cdn.auth0.com/blog/cloudbleed-post/cloudflare-logo.png", "date" : "February 28, 2017" } , { "title" : "Houghton Mifflin Harcourt Chooses Auth0 to Consolidate Identity", "description" : "In an effort to consolidate various platforms under a single unified experience, Houghton Mifflin Harcourt needed a powerful identity management solution and Auth0 delivered.", "author_name" : "Martin Gontovnikas", "author_avatar" : "https://www.gravatar.com/avatar/df6c864847fba9687d962cb80b482764??s=60", "author_url" : "http://twitter.com/mgonto", "tags" : "press", "url" : "/houghton-mifflin-harcourt-chooses-auth0-for-identity-management/", "keyword" : "Bellevue, WA - Houghton Mifflin Harcourt (HMH), a global learning company specializing in pre-K-12 education content and services, has chosen Auth0 as their identity management platform for their new educational portal. The new portal allows school districts across the United States and the world to access content and technology through a unified experience. Being a global organization with many unique technologies, data stores, and customers to serve, HMH needed a solution that would allow them to consolidate everything through a single unified experience while allowing existing customers to access this new platform through their existing identity. Auth0 stepped up to the challenge and delivered a solution that did just that: existing customers could log in with their established identities, while new customers and users could easily be onboarded. Performance was a key consideration for HMH; the company has to deal with large, changing user sets and enrollment data, especially right before the school year begins. Auth0 was able to demonstrate exceptional performance with their bulk loader for managing large data sets, as well as the platform's capability to handle over 1 billion authentication transactions per day. About Auth0: Auth0 provides frictionless authentication and authorization for developers. The company makes it easy for developers to implement even the most complex identity solutions for their web, mobile, and internal applications. Ultimately, Auth0 allows developers to control how a person's identity is used with the goal of making the internet safer. As of August 2016, Auth0 has raised over $24M from Trinity Ventures, Bessemer Venture Partners, K9 Ventures, Silicon Valley Bank, Founders Co-op, Portland Seed Fund and NXTP Labs, and the company is further financially backed with a credit line from Silicon Valley Bank. For more information visit https://auth0.com or follow @auth0 on Twitter.", "image" : "https://cdn.auth0.com/blog/houghton-mifflin-harcourt-pr/HMH_logo.png", "date" : "February 27, 2017" } , { "title" : "SHA-1 Has Been Compromised In Practice", "description" : "The CWI Institute and Google have successfully demonstrated a practical SHA-1 collision attack
by publishing two unique PDF files that produce the same hash value.", "author_name" : "Ado Kukic", "author_avatar" : "https://s.gravatar.com/avatar/99c4080f412ccf46b9b564db7f482907?s=200", "author_url" : "https://twitter.com/kukicado", "tags" : "security", "url" : "/sha-1-collision-attack/", "keyword" : "TL;DR: Researchers published a technique for causing SHA-1 collisions and demonstrated it by providing two unique PDF documents that produced the same SHA-1 hash value. Secure Hash Algorithm 1, or SHA-1, is a cryptographic hash function designed by the United States National Security Agency and released in 1995. The algorithm was widely adopted in the industry for digital signatures and data integrity purposes. For example, applications would use SHA-1 to convert plain-text passwords into a hash that would be useless to a hacker (unless, of course, the hacker could reverse engineer the hash back into the original password, which they could not). As for data integrity, a SHA-1 hash ensured that no two files would have the same hash, and even the slightest change in a file would result in a new, completely unique hash. According to Wikipedia, the ideal cryptographic hash function has five main properties: it is deterministic, so the same message always results in the same hash; it is quick to compute the hash value for any given message; it is infeasible to generate a message from its hash value except by trying all possible messages; a small change to a message should change the hash value so extensively that the new hash value appears uncorrelated with the old hash value; and it is infeasible to find two different messages with the same hash value. In 2005, researchers discovered potential vulnerabilities in the SHA-1 algorithm, and by 2010 many organizations had stopped using it as it was deemed insecure. The potential vulnerabilities had not been proven until today, when the CWI Institute and Google demonstrated a practical collision attack against SHA-1. The researchers were able to provide two unique PDF files that produced the same exact SHA-1 hash value (source: shattered.io). The team published a practical technique showing how to generate a collision, bringing the fears that SHA-1 was insecure to reality. The technique outlined required years of research and immense computational resources: from the research published, it would take a cluster of 110 powerful GPUs running computations 24 hours a day for an entire year to cause a collision, or about 6,500 years on a single CPU. So while this attack vector is fairly impractical, it is not impossible. The SHA-1 collision attack required 9,223,372,036,854,775,808 SHA-1 computations. This is a big deal because even though many organizations have stopped using SHA-1, underlying systems still often rely on it: software updates, ISO checksums, PGP signatures, digital certificate signatures, Git, and others still make use of SHA-1 for data integrity. If a malicious party were able to create a collision for a popular piece of software, for example, and distributed it on the web, they could infect many unsuspecting users, causing all sorts of damage. On the bright side, the typical user does not have to worry too much: certification authorities are forbidden from issuing SHA-1 certificates, and Google and Mozilla will warn users accessing HTTPS websites that use SHA-1 signed certificates. More and more organizations are using safer alternatives like SHA-256 for their cryptographic needs. Additionally, since the published attack vector has only been proven with PDF files, the team created a website, shattered.io, which allows you to test your PDF files and see if they could have been compromised. For most, security is not actively thought about, but for us at Auth0, security is the only thing we think about. OK, not the only thing, but it's up there. In addition to embracing open authentication standards like OAuth and OpenID, we follow industry standards and best practices for security, and when this popped up on our radar we just had to share it. Learn more about our security practices here. For more info on the SHA-1 collision attack, be sure to
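The determinism and avalanche properties listed above are easy to see in practice. A minimal sketch (not from the original article) using Node.js's built-in crypto module:

```typescript
import { createHash } from 'crypto';

// Hex digest of a UTF-8 string under the given hash algorithm.
function digest(algorithm: string, message: string): string {
  return createHash(algorithm).update(message, 'utf8').digest('hex');
}

// Deterministic: the same message always yields the same digest.
console.log(digest('sha1', 'hello world'));
// 2aae6c35c94fcfb415dbe95f408b9ce91ee846ed

// Avalanche: appending a single character changes the digest completely.
console.log(digest('sha1', 'hello world!'));

// SHA-256 (the recommended replacement) produces 256-bit digests:
// 64 hex characters instead of SHA-1's 40.
console.log(digest('sha256', 'hello world').length); // 64
```

Note that a collision attack does not break any of these properties for ordinary inputs; it shows that the "infeasible to find two messages with the same hash" property no longer holds against a well-funded attacker.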
check out shattered.io and Google's security blog post. If you are using SHA-1, please switch to a more secure hashing algorithm like SHA-256.", "image" : "https://cdn.auth0.com/blog/sha1-collision/logo.png", "date" : "February 24, 2017" } , { "title" : "Auth0 is OpenID Connect Certified", "description" : "Auth0 conforms to the OpenID Connect protocol and allows clients to verify the identity of end-users through a reliable implementation.", "author_name" : "Martin Gontovnikas", "author_avatar" : "https://www.gravatar.com/avatar/df6c864847fba9687d962cb80b482764??s=60", "author_url" : "http://twitter.com/mgonto", "tags" : "openid", "url" : "/we-are-now-open-id-certified/", "keyword" : "In May of last year, Auth0 officially gained certifications for the OP Basic and OP Config profiles of the OpenID Connect spec. As of February this year, Auth0 has gained two new OpenID Connect certifications: OP Implicit and Hybrid OP. With these certifications, we're thrilled to join the ranks of industry leaders such as Google, Microsoft, PayPal, and others who are embracing standards-based authentication. OpenID Connect, as a layer on top of the OAuth 2.0 authorization protocol, allows for decentralized authentication and improves user access to websites and apps. Getting certified means ensuring that our implementation of the protocol meets the official specifications as outlined by OpenID. OpenID is a cornerstone of the modern, open web, and we're proud that our implementation has the official stamp of approval. What is OpenID Connect? OpenID Connect is an open identity standard. It acts as an authentication layer (proving who you are) on top of the OAuth 2.0 authorization standard (granting you access). A user gets an OpenID account through an OpenID identity provider. The user uses that account to sign into any site (a relying party) that accepts OpenID authentication, for example YouTube. This open-source framework, provided by the OpenID standard, lets the user, relying party, and identity provider “just work” together. Instead of having to sign
up on a website and keep track of your passwords, you only need to sign up once and use that login across various applications. On a website, it might look something like this: a user is already logged into Facebook or Google (an identity provider) with a set of credentials. This set of credentials can then be used to log into another website or application. This site or app will ask the user to sign up with Facebook or Google. When a user clicks on Google or Facebook, they're authorizing that identity provider to back up their claim; then the user is redirected to the website or application. This use of linked identities means you only have to manage a single username and password for websites. With OpenID, users don't need traditional authentication tokens like a username and password; all they need is to be registered on a site with an OpenID identity provider. It's decentralized: any website can use OpenID as a way to log users in. Why is OpenID Connect important? Before OpenID, people built site-specific networks with their own signup and login systems. The idea that you could select your own identity provider for logging into a website, and a common standard that would connect all these systems, didn't exist. Some big players, like Facebook, had built their own solutions for SSO, but the decentralized OpenID model was so powerful and beneficial that even they eventually adopted it. As VentureBeat's Eric Eldon wrote upon its release: the point of OpenID, as all of these companies seem to accept, is that users don't want to use just any one service to sign in everywhere. Instead, users should have the choice to log into any site using any other identity. Making it easier for people to log in anywhere just means more people will log in overall — and potentially become users of any of these companies. Think of OpenID as your driver's license for the entire internet. Websites that use OpenID won't ask for your information constantly, making it faster and easier to sign up. Plus, you can associate information with your OpenID, such as your name and email address, and decide how much websites get to know about you, so they won't bug you for the same information every single time you sign up. Since you're uniquely identified over the internet, OpenID Connect is also a good way to connect your accounts into a more unified persona. The moment you establish yourself as the individual who uses a specific OpenID, whenever someone sees you're using your OpenID online, they'll know it's you. If your friend opens a website and sees someone with your OpenID has made a comment, they can be certain it was you, not someone with the same name coincidentally. Why get OpenID certification? You've carried out rounds and rounds of tests to check your OpenID specs, and the results were great, with strong participation. So what's the point of getting certified? Certification ensures credibility. In your own testing, you can pick and choose what aspects of your OpenID implementation you want to test. Certification involves meeting a set of minimum criteria that are standard across the board, and your results (and the process you used to get there) are open for public oversight. When you're done, you can prove that your OpenID implementation is conformant with the official specs — not just for your customers and potential customers, but for yourself.", "image" : "https://cdn.auth0.com/blog/open-id-certified/logo.png", "date" : "February 23, 2017" } , { "title" : "Serverless REST API with Angular, Persistence and Security", "description" : "Develop an Angular app from scratch, with serverless REST API, security and persistence and deploy it to GitHub Pages in no time.", "author_name" : "Bruno Krebs", "author_avatar" : "https://www.gravatar.com/avatar/76ea40cbf67675babe924eecf167b9b8?s=60", "author_url" : "https://twitter.com/brunoskrebs", "tags" : "angular", "url" : "/serverless-angular-app-with-persistence-and-security/", "keyword" : "TL;DR: Using the right tools, you can create an application from scratch and release it to production very quickly. In this post, I will show you how to develop a task list application with
angularthat consumes a serverless rest api and persists data to a mongodb database hosted by mlabthis application will also focus on securitywith auth0and will be deployed to github pagesoverviewin this post i will show you thatwith the right toolsit is possible to start a full stack app—taskapplication in this case—from scratchand release it to production in a short timeour full stack app will support static file hostinga secure rest apiand a robust persistence layerthis is how we will manage all the moving partsidentity management and security supported by auth0 and json web tokensjwtserverless rest api provided by an express app with webtaskpersistence layer with a mongodb database hosted by mlabstatic file hosting via deployment to github pagessince the app that we are going to develop is quite simple in terms of featuresit wont be necessary to have mongodb running on our local environmentwe will use mlab during development as well as productionthe only tools that are expected to be installed are nodejs and npmour application will have the following featuressign in and sign outlist that shows tasks from a userform that allows users to add new tasksa button for each taskto enable users to remove these taskscreating a new angular appwe are going to create our new angular app with angular cliactuallywe will be using this tool during the whole process to create components/services and build our app for productionhere is aof a few commands that we will have to issue to install angular cli and to create our app skeleton# install angular cli globallynpm install -g @angular/cli# create skeletonng new task-&cd task-# serve the skeleton on our dev envng servethe last command is responsible for packaging our application with the development profileand for serving it locally with webpack development serverafter executing all these commandsnavigate to http//localhost4200/ to see it up and runningsecuring angular with auth0the first thing that we are going to take care of in 
our application is security. Security must be a first priority in any application that handles sensitive, third-party data like the task list that we are about to develop. To start, sign up for a free Auth0 account and take note of the Client ID and Domain; both values are going to be used to configure Lock, an embeddable login system.

Important: Auth0 requires a list of Allowed Callback URLs. This list contains all the URLs to which Auth0 can redirect a user after issuing a JWT. Therefore we must configure at least two URLs: http://localhost:4200/ and the URL where our app will be exposed, something like https://brunokrebs.github.io/task-list/. This URL will be defined when we release to GitHub Pages.

To use Lock, we must install two libraries in our application: auth0-lock and angular2-jwt. Since we are using TypeScript with Angular, we will also install the @types/auth0-lock library, which provides TypeScript definitions for Lock. Also, since we want to provide our users a good looking interface, we are going to install Angular Material. These dependencies are installed with the following commands:

```bash
# Auth0 Lock and Angular 2 JWT runtime deps
npm install --save auth0-lock angular2-jwt @angular/material

# types definitions for Auth0 Lock
npm install --save-dev @types/auth0-lock
```

Let's use Angular CLI to create a NavBarComponent. This component will have sign in and sign out buttons. We will also create an AuthService that will be responsible for sign in, sign out, and for validating whether the user is authenticated or not.

```bash
# generates NavBarComponent files under src/app/nav-bar
ng g component nav-bar

# generates AuthService under src/app/auth.service.ts
ng g service auth
```

After executing these commands, Angular CLI will have created the following file structure:

```
src
|- app
   |- nav-bar
      |- nav-bar.component.ts
      |- nav-bar.component.html
      |- nav-bar.component.css
   |- auth.service.ts
```

Actually, two extra files were created: src/app/auth.service.spec.ts and src/app/nav-bar/nav-bar.component.spec.ts. We would use these files to write tests for both the component and the service. However, for the sake of simplicity, we won't address testing in this post. You can check the following references to read about
testing in Angular: Angular 2 Testing In Depth: Services; Angular Testing; and Testing Components in Angular 2 with Jasmine.

To integrate with Lock, let's first implement src/app/auth.service.ts with the following code:

```typescript
import { Injectable } from '@angular/core';
import Auth0Lock from 'auth0-lock';
import { tokenNotExpired } from 'angular2-jwt';

// FIXME: replace these with your own Auth0 'client id' and 'domain'
const AUTH0_CLIENT_ID = 'YOUR_AUTH0_CLIENT_ID';
const AUTH0_DOMAIN = 'YOUR_AUTH0_DOMAIN';

// this is the key to the JWT in the browser localStorage
const ID_TOKEN = 'id_token';

@Injectable()
export class AuthService {
  lock = new Auth0Lock(AUTH0_CLIENT_ID, AUTH0_DOMAIN, {});

  constructor() {
    // listening to 'authenticated' events
    this.lock.on('authenticated', (authResult) => {
      localStorage.setItem(ID_TOKEN, authResult.idToken);
    });
  }

  signIn() { this.lock.show(); }

  signOut() { localStorage.removeItem(ID_TOKEN); }

  authenticated() { return tokenNotExpired(); }
}
```

In the code above, there are three things worth mentioning. First, we must replace AUTH0_CLIENT_ID and AUTH0_DOMAIN with the values that we noted previously. Second, the ID_TOKEN constant references the key where the JWT will be saved (on the user's browser localStorage). And third, the constructor of this service adds a callback listener to the authenticated event on Lock. This callback saves the token issued by Auth0 in localStorage. To sign out a user, it is just a matter of removing this token from localStorage.

Our AuthService class is good to go, but unlike components, Angular CLI does not add services to our @NgModule definition by default. To do this, open the src/app/app.module.ts file, add this service as a provider, and add Angular Material in the imports array:

```typescript
// ... other imports
import { AuthService } from './auth.service';
import { MaterialModule } from '@angular/material';

@NgModule({
  // ... other properties
  imports: [
    // ... other imports
    MaterialModule.forRoot()
  ],
  providers: [ AuthService ],
  // ... other properties
})
export class AppModule { }
```

We can now focus on implementing our NavBarComponent. We will inject AuthService, whose three public methods will be used by our HTML interface. Then we will implement the interface and add some CSS rules to improve its looks.
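Before wiring up the UI, it is worth seeing what the `authenticated()` check above amounts to. The snippet below is a sketch (not the angular2-jwt library itself) of the test that `tokenNotExpired()` performs: it decodes the JWT payload, reads the `exp` claim, and compares it with the current time. Note that this only decodes the token; signature verification happens on the server.

```javascript
// Sketch of an expiry check like the one tokenNotExpired() performs.
// Decoding only -- this does NOT prove the token is authentic.
function isTokenExpired(token, nowSeconds) {
  // the payload is the base64-encoded middle segment of a JWT
  const payloadJson = Buffer.from(token.split('.')[1], 'base64').toString('utf8');
  const claims = JSON.parse(payloadJson);
  return claims.exp !== undefined && claims.exp < nowSeconds;
}

// demo with a hand-built, unsigned token whose exp claim is t = 1000
const demoPayload = Buffer.from(JSON.stringify({ exp: 1000 })).toString('base64');
const demoToken = `header.${demoPayload}.signature`;
console.log(isTokenExpired(demoToken, 999));  // false: not expired yet
console.log(isTokenExpired(demoToken, 1001)); // true: past exp
```

The real helper also handles tokens without an `exp` claim and clock skew; the point here is only that "authenticated" on the client means "a non-expired token is present in localStorage".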
Open the src/app/nav-bar/nav-bar.component.ts file and implement the following code:

```typescript
import { Component } from '@angular/core';
import { AuthService } from '../auth.service';

@Component({
  selector: 'app-nav-bar',
  templateUrl: './nav-bar.component.html',
  styleUrls: ['./nav-bar.component.css']
})
export class NavBarComponent {
  constructor(private authService: AuthService) { }
}
```

This component simply gets AuthService injected and nothing else. Injecting a service like this allows the user interface to call its methods, as we will see. Now, let's open src/app/nav-bar/nav-bar.component.html and implement it as follows:

```html
<md-toolbar color="primary">
  <span>Task List</span>
  <span class="fill-space"></span>
  <button md-button (click)="authService.signIn()" *ngIf="!authService.authenticated()">Sign In</button>
  <button md-button (click)="authService.signOut()" *ngIf="authService.authenticated()">Sign Out</button>
</md-toolbar>
```

Our navbar exposes our application's title along with two buttons. At any given time, only one button is truly visible to the user: the sign in button is going to be visible when the user is not yet authenticated, and the sign out button will be visible otherwise. To make our interface look better, we have also added a span.fill-space element. This element will be responsible for pushing both buttons to the right border. To accomplish this, we need to add the CSS rule that follows to the src/app/nav-bar/nav-bar.component.css file:

```css
.fill-space {
  flex: 1 1 auto;
}
```

Good, we now have both the NavBarComponent and the AuthService fully implemented and integrated. But we still need to add this component to our src/app/app.component.html file, otherwise it will never get rendered. Just replace the content of this file with the following line of code:

```html
<app-nav-bar></app-nav-bar>
```

If we run our application now, it wouldn't look neat, because most major browsers come with an 8px margin on body elements and because we haven't configured any Angular Material theme. We will fix both issues by updating our src/styles.css file to look like this:

```css
@import '~@angular/material/core/theming/prebuilt/indigo-pink.css';

body {
  margin: 0;
}
```

We are now good to go, so let's start our development server, by issuing ng serve, and head to http://localhost:4200 to see how things look. You can even sign in and sign out, although there won't be much to see.

## Adding a Welcome Message to Visitors

To make our
application a friendly place, let's add a welcoming message. To do that, first we will inject AuthService in the src/app/app.component.ts file, making it look like this:

```typescript
import { Component } from '@angular/core';
import { AuthService } from './auth.service';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  constructor(private authService: AuthService) { }
}
```

After that, we are going to add the message, as an md-card component from Angular Material, to src/app/app.component.html:

```html
<div class="app-container">
  <md-card *ngIf="!authService.authenticated()">
    <md-card-title>Hello, visitor.</md-card-title>
    <md-card-subtitle>Please <a (click)="authService.signIn()">sign in</a> to manage your task list.</md-card-subtitle>
  </md-card>
</div>
```

And last, we are going to make a fix to the interface by adding a rule to src/app/app.component.css:

```css
.app-container {
  padding: 20px;
}
```

Heading to our app (http://localhost:4200/), we can see our new welcome message, if we are not authenticated.

## Implementing the Serverless REST API

Now that we have our application integrated with Auth0, which allows our users to sign in and sign out, let's create our serverless REST API. This API will handle POST requests (to persist new tasks), GET requests (to retrieve tasks from a user), and DELETE requests (to remove tasks). We will first create a file called tasks.js in a new folder called webtask, and then we will add the following code to it:

```javascript
'use strict';

// imports node modules
const express = require('express');
const mongojs = require('mongojs');
const bodyParser = require('body-parser');
const jwt = require('jsonwebtoken');

// creates express app with json body parser
const app = new express();
app.use(bodyParser.json());

// defines rest api (http methods)
app.get('/', getTasks);
app.post('/', addTask);
app.delete('/', deleteTask);

// exports rest api
module.exports = app;

function addTask(req, res) {
  let userCollection = loadUserCollection(req.webtaskContext);
  // save new task to user collection
  userCollection.save({
    createdAt: new Date(),
    description: req.body.description
  });
  res.end();
}

function getTasks(req, res) {
  let userCollection = loadUserCollection(req.webtaskContext);
  // retrieves all tasks sorting by descending creation date
  userCollection.find().sort({ createdAt: -1 }, (err, data) => {
    res.status(err ? 500 : 200).send(err || data);
  });
}

function deleteTask(req, res) {
  let userCollection = loadUserCollection(req.webtaskContext);
  // removes a task based on its id
  userCollection.remove({ _id: mongojs.ObjectId(req.query.id) });
  res.end();
}

function loadUserCollection(webtaskContext) {
  // these secrets are configured when creating the webtask
  const AUTH0_SECRET = webtaskContext.secrets.AUTH0_SECRET;
  const MONGO_USER = webtaskContext.secrets.MONGO_USER;
  const MONGO_PASSWORD = webtaskContext.secrets.MONGO_PASSWORD;
  const MONGO_URL = webtaskContext.secrets.MONGO_URL;

  // removes the 'Bearer ' prefix that comes in the authorization header
  let authorizationHeader = webtaskContext.headers.authorization;
  authorizationHeader = authorizationHeader.replace('Bearer ', '');

  // verifies token authenticity
  let token = jwt.verify(authorizationHeader, AUTH0_SECRET);

  // connects to mongodb and returns the user collection
  let mongodb = mongojs(`${MONGO_USER}:${MONGO_PASSWORD}@${MONGO_URL}`);
  return mongodb.collection(token.sub);
}
```

The code is quite simple and easy to understand, but an overall explanation might come in handy. The main purpose of this file is to export an Express app that handles three HTTP methods for a single route, the main / route. These three methods, as explained before, allow users to create, retrieve, and delete tasks from collections on a MongoDB database. Every user will have their own collection—not the best approach, since MongoDB can handle a maximum of 24,000 collections, but good enough to start. This collection is based on the sub claim (which identifies the user) present in the JWT issued by Auth0.

The last function definition in the tasks.js file, loadUserCollection, is actually responsible for two things: security and the MongoDB connection. When a user issues any request to our API, the function verifies that the Authorization header sent was actually signed by Auth0. If none is sent, a non-user-friendly error is generated. This is done through the jwt.verify function with the help of the AUTH0_SECRET key. The second responsibility, connecting to MongoDB, is handled by the mongojs module and depends on three configuration variables.

All these configuration variables—three to connect to MongoDB and one to verify Auth0 tokens—are passed to Webtask when creating the serverless function. We will see how this is done soon.

This is the whole REST API implementation. With this code we are ready to handle users' requests that will be sent by the components that we are about to create on our Angular app.
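To make the semantics of the three handlers concrete without spinning up a database, here is an in-memory stand-in for a user's collection. The names (`createCollection`, and the simplified `remove(id)` signature) are made up for illustration; the real code talks to mLab through mongojs, but the behavior the handlers rely on is the same: save a document, find them sorted by descending creation date, and remove one by id.

```javascript
// In-memory sketch of the per-user collection the webtask operates on.
function createCollection() {
  const docs = [];
  let nextId = 1;
  return {
    // addTask: store a new document, assigning an id
    save(doc) { docs.push(Object.assign({ _id: nextId++ }, doc)); },
    // getTasks: newest tasks first (createdAt descending)
    find() { return docs.slice().sort((a, b) => b.createdAt - a.createdAt); },
    // deleteTask: drop the document with the given id
    remove(id) {
      const index = docs.findIndex(d => d._id === id);
      if (index >= 0) docs.splice(index, 1);
    }
  };
}

const tasks = createCollection();
tasks.save({ createdAt: 1, description: 'older task' });
tasks.save({ createdAt: 2, description: 'newer task' });
console.log(tasks.find()[0].description); // 'newer task' (newest first)
tasks.remove(1);                          // removes 'older task'
console.log(tasks.find().length);         // 1
```

In the real API, one such collection exists per user, keyed by the `sub` claim of the verified JWT, so two signed-in users can never see each other's tasks.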
But there are a few more steps that we need to perform first.

## Creating a MongoDB Database

To make our lives easier and to avoid having to install and support MongoDB by ourselves, we are going to use mLab, a cloud-hosted MongoDB. The first thing that we have to do is to head to their website and sign up for a free account. After verifying our email address, we have to create a new deployment. Since we are just starting our app and we won't get too much traffic, let's choose the Single Node plan and the Sandbox type, which provides us 500 MB of DB storage for free. You will also need to type a database name; choose something like task-list. The last thing that we will have to do is to create a user to connect to this database. If you chose task-list as the name of your database, this is the link to create users.

## Configuring a Webtask Account

We will also need to create a Webtask account, but this is as easy as it can be. Webtask, being a product of Auth0, relies on Lock and enables us to create an account with one of the following identity providers (IdP): Facebook, Google, or Microsoft. It is just a matter of hitting a button to create an account.

After choosing an IdP, we are presented with a succinct, three-step process demonstrating how to create a hello world serverless function. We already have a webtask to deploy, so let's follow only the first two steps in order to configure the CLI tool on our computer:

```bash
# install webtask cli tool
npm install wt-cli -g

# initialize it with our email address
wt init me@somewhere.com
```

You will be asked to enter the verification code that was sent to your email address. This is the final step in the Webtask account configuration.

## Deploying Our Serverless REST API

With mLab and Webtask accounts created, and having the Webtask CLI tool correctly configured, we can now deploy our serverless REST API to production. This is done with the following command:

```bash
wt create webtask/tasks.js \
  --meta wt-compiler=webtask-tools/express \
  -s AUTH0_SECRET=secret-from-auth0.com \
  -s MONGO_USER=task-list-user \
  -s MONGO_PASSWORD=111222 \
  -s MONGO_URL=ds147069.mlab.com:47069/task-list \
  --prod
```

The first option passed to the wt tool specifies that we want to create a webtask based on our webtask/tasks.js file. The second parameter identifies our code as being an Express app, which needs to be pre-compiled by Webtask with the help of the webtask-tools/express tool. The following four parameters are the secrets that we use in our webtask (the -s prefix denotes them as secrets). The last parameter creates our webtask in production mode, which makes it faster.

Be aware that the values above have to be replaced with values that come from our Auth0 account and from our mLab account. The AUTH0_SECRET value can be found in the same place as the Client ID and Domain, and the last three values, related to MongoDB, can be found on mLab's dashboard.

Having successfully issued the webtask creation command, we can now focus on working on the main feature of our Angular application: the task list.

## Building Our Angular Interface

There are two components that we will need to create to allow users to interact with their task lists. We will create a TaskListComponent, to expose the task list, and a TaskFormComponent, that will allow the user to create new tasks. Besides these components, we will create a TaskListService that will handle all AJAX requests. We will use Angular CLI to create them for us:

```bash
# creates the main component that lists tasks
ng g component task-list

# creates a component to hold a form to add tasks
ng g component task-list/task-form

# creates a service to handle all interaction with our rest api
ng g service task-list/task-list
```

## Integrating Angular with the Serverless REST API

Both TaskListComponent and TaskFormComponent will depend on TaskListService to communicate with our serverless REST API, so let's handle the service implementation first. Open the recently created service file, src/app/task-list/task-list.service.ts, and insert the following code:

```typescript
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs';
import { AuthHttp } from 'angular2-jwt';

@Injectable()
export class TaskListService {
  private static TASKS_ENDPOINT =
    'https://wt-e1870b8a73b27cdee73c468b8c8e3bc4-0.run.webtask.io/tasks';

  constructor(private authHttp: AuthHttp) { }

  loadTasks$(): Observable<any> {
    return this.authHttp.get(TaskListService.TASKS_ENDPOINT);
  }

  addTask$(task): Observable<any> {
    return this.authHttp.post(TaskListService.TASKS_ENDPOINT, { description: task });
  }

  deleteTask$(task): Observable<any> {
    return this.authHttp.delete(TaskListService.TASKS_ENDPOINT + '?id=' + task._id);
  }
}
```

There are three important things to note in this code. First, the TASKS_ENDPOINT constant: it must reference the URL returned by the wt create command above. Second, this class is not using Http from @angular/http; it is using AuthHttp, which is provided by angular2-jwt and which integrates gracefully with auth0-lock. Instances of this class automatically send an Authorization header with whatever content they find under the id_token key of the user's browser localStorage. As you may have noted, this is the same place where we stored tokens when configuring AuthService. Third, all methods in TaskListService return Observables, leaving the caller to decide what to do with the response sent by our serverless REST API.

To inject TaskListService in our components, we need to make a few changes in our main @NgModule, located in src/app/app.module.ts:

```typescript
// ... other imports
import { Http, RequestOptions } from '@angular/http';
import { AuthHttp, AuthConfig } from 'angular2-jwt';
import { TaskListService } from './task-list/task-list.service';

// creates a factory for AuthHttp
export function authHttpFactory(http: Http, options: RequestOptions) {
  return new AuthHttp(new AuthConfig(), http, options);
}

@NgModule({
  // ... other properties
  providers: [
    AuthService,
    TaskListService, // adds the new service
    { // defines how to provide AuthHttp
      provide: AuthHttp,
      useFactory: authHttpFactory,
      deps: [ Http, RequestOptions ]
    }
  ],
  bootstrap: [AppComponent]
})
export class AppModule { }
```

The first change that we made to our module was to add TaskListService as a provider, just like we did before with AuthService. The second change also added a provider, but in a more complex form: the AuthHttp provider needed help from a factory—declared as authHttpFactory—to be created. This factory has Http and RequestOptions as dependencies, so we needed to define the provider as a literal object, passing these dependencies explicitly.

## Listing Tasks with Angular

Our TaskListComponent can now be implemented. Let's open the src/app/task-list/task-list.component.ts file and apply the code below:

```typescript
import { Component, OnInit } from '@angular/core';
import { TaskListService } from './task-list.service';

@Component({
  selector: 'app-task-list',
  templateUrl: './task-list.component.html',
  styleUrls: ['./task-list.component.css']
})
export class TaskListComponent implements OnInit {
  private tasks: string[];

  constructor(private taskListService: TaskListService) { }

  ngOnInit() {
    this.loadTasks();
  }

  private loadTasks() {
    this.taskListService.loadTasks$().subscribe(
      response => this.tasks = response,
      error => console.log(error)
    );
  }

  taskAddedHandler(task) {
    this.taskListService.addTask$(task).subscribe(
      response => this.loadTasks(),
      error => console.log(error)
    );
  }

  deleteTask(task) {
    this.taskListService.deleteTask$(task).subscribe(
      response => this.loadTasks(),
      error => console.log(error)
    );
  }
}
```

This class gets TaskListService injected and adds a few callback methods to the Observables' responses. Both addTask$ and deleteTask$ trigger a call to the loadTasks method when the Observables respond without errors. console.log is used by these methods to handle cases where errors are issued by the serverless REST API. The loadTasks method calls taskListService.loadTasks$ to assign the result to the tasks property.

With the three exposed methods and the tasks property filled, we can now implement the TaskListComponent interface, which resides in the src/app/task-list/task-list.component.html file. This is what this file should look like:

```html
<md-card>
  <md-card-title>All your tasks in one place</md-card-title>
  <md-list>
    <md-list-item class="task-item" *ngFor="let task of tasks; trackBy: $index">
      <p><small><strong>{{ task.createdAt | date:'short' }}</strong></small></p>
      {{ task.description }}
      <button class="delete" md-button md-raised-button color="accent"
              (click)="deleteTask(task)">Delete</button>
    </md-list-item>
    <md-list-item *ngIf="tasks?.length == 0">You have no pending tasks.</md-list-item>
  </md-list>
</md-card>
```

Here we added an md-list, provided by Angular Material, that iterates through the tasks, showing their creation date and their description. Each task got a button that enables users to delete it. To make our interface better, let's add two CSS rules to the src/app/task-list/task-list.component.css file:

```css
.task-item {
  padding: 10px;
  margin-bottom: 10px;
  background-color: #eee;
}

button.delete {
  float: right;
  top: -60px;
}
```

This will make different tasks distinguishable with a gray background color, and push the delete button to the right, aligning it vertically to the task.

Now our interface is ready, so we need to make it visible by adding it to the src/app/app.component.html file. Open it and add the TaskListComponent as follows:

```html
<!-- ... card with welcome message ... -->
<app-task-list></app-task-list>
```

If we open our application in a browser, by accessing http://localhost:4200, we will see our task list. Our app's completion now depends on implementing the last component, TaskFormComponent, to allow users to add tasks to their lists.
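Before moving on, it may help to see the Observable contract the service and component agree on, stripped of Angular and RxJS. The sketch below is a toy model (names like `observableOf` are made up): the service returns a lazy value, and the caller decides what happens with the response by subscribing with success and error callbacks, exactly as TaskListComponent does above.

```javascript
// Toy model of the subscribe contract (illustrative only; the app uses RxJS).
function observableOf(value) {
  return {
    subscribe(onNext, onError) {
      try {
        onNext(value);
      } catch (err) {
        if (onError) onError(err);
      }
    }
  };
}

// hypothetical service method returning a canned response
function loadTasks$() {
  return observableOf([{ description: 'write blog post' }]);
}

let tasks = [];
loadTasks$().subscribe(
  response => { tasks = response; }, // what TaskListComponent's callback does
  error => console.log(error)
);
console.log(tasks.length); // 1
```

The design choice this enables: TaskListService knows nothing about the UI, and TaskListComponent knows nothing about HTTP; the Observable is the only thing they share.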
## Adding Tasks with Angular

To enable a user to add tasks, we need to open the src/app/task-list/task-form/task-form.component.ts file and implement it as follows:

```typescript
import { Component, EventEmitter, Output } from '@angular/core';

@Component({
  selector: 'app-task-form',
  templateUrl: './task-form.component.html',
  styleUrls: ['./task-form.component.css']
})
export class TaskFormComponent {
  @Output() taskAdded = new EventEmitter();

  public task: string = null;

  addTask() {
    this.taskAdded.emit(this.task);
    this.task = null;
  }
}
```

This component accepts a user's task input and emits a taskAdded event with the data. This component's HTML, located in src/app/task-list/task-form/task-form.component.html, is also really simple:

```html
<div class="task-form">
  <md-input-container>
    <input mdInput [(ngModel)]="task" placeholder="New task">
  </md-input-container>
  <button md-button md-raised-button color="primary" (click)="addTask()">Add</button>
</div>
```

When clicked, the Add button triggers the addTask method in the component. This method then triggers the taskAdded event emitter. TaskListComponent is the component that will listen to these events; we already implemented a method, called taskAddedHandler, that can handle such events. We just need to update that component's HTML to add the TaskFormComponent and register the event handler. Let's open src/app/task-list/task-list.component.html and add the app-task-form tag just before our list, as follows:

```html
<!-- ... card title and subtitle ... -->
<app-task-form (taskAdded)="taskAddedHandler($event)"></app-task-form>
<!-- ... md-list ... -->
```

And here we go: our app is now fully implemented and ready to go to production. Or is it? If we play a little with the application, we will see that under some conditions the user experience is not that good: the app takes a while to update the task list when a new task is added or an existing one gets deleted. So there is room for improvement.

## Adding an AJAX Loading Indicator

To solve this issue, let's use a small module called Angular 2 Slim Loading Bar. To install it, run npm install --save ng2-slim-loading-bar, and then open the src/app/app.module.ts file to import it:

```typescript
// ... other module imports
import { SlimLoadingBarModule } from 'ng2-slim-loading-bar';

@NgModule({
  // ... declarations
  imports: [
    // ... other imports
    SlimLoadingBarModule.forRoot()
  ],
  // ... providers and bootstrap
})
export class AppModule { }
```

We will also import its CSS rules by adding the following line to the top of our
src/styles.css file:

```css
@import '~ng2-slim-loading-bar/bundles/style.css';

/* everything else */
```

After that, we need to make our AppComponent use SlimLoadingBarService. To do that, let's open src/app/app.component.ts and edit it as follows:

```typescript
// ... other imports
import { SlimLoadingBarService } from 'ng2-slim-loading-bar';

// ... component definition
export class AppComponent {
  constructor(private slimLoading: SlimLoadingBarService) { }

  // ... method definitions
}
```

SlimLoadingBarService contains two methods that we will use: start, which starts the loading bar, and complete, which ends the loading indicator. These methods will be registered as event listeners on TaskListComponent. We still didn't create the event emitters in this component, but we can configure the listeners in advance. Open src/app/app.component.html and edit it like this:

```html
<!-- ... welcome message ... -->
<app-task-list (startAjaxRequest)="slimLoading.start()"
               (completeAjaxRequest)="slimLoading.complete()">
</app-task-list>

<!-- adds the slim loading bar to our app -->
<ng2-slim-loading-bar [color]="'gold'" [height]="'4px'"></ng2-slim-loading-bar>
```

The last thing we will have to do is to edit the src/app/task-list/task-list.component.ts file to create and use both startAjaxRequest and completeAjaxRequest event emitters on TaskListComponent:

```typescript
// ... other imports
import { Component, EventEmitter, OnInit, Output } from '@angular/core';

// ... component definition
export class TaskListComponent implements OnInit {
  @Output() startAjaxRequest = new EventEmitter<void>();
  @Output() completeAjaxRequest = new EventEmitter<void>();

  // ... properties, constructor, and ngOnInit definitions

  private loadTasks() {
    this.startAjaxRequest.emit();
    this.taskListService.loadTasks$().subscribe(
      response => {
        this.tasks = response;
        this.completeAjaxRequest.emit();
      },
      error => console.log(error)
    );
  }

  // taskAddedHandler and deleteTask also call this.startAjaxRequest.emit()
  // before issuing their requests
}
```

Here we have created both event emitters and have added them to the three methods that depend on AJAX requests. Whenever one of these methods gets called, we emit an event, through this.startAjaxRequest.emit(), to make the slim loading bar start running the loading indicator. After getting a response back from the AJAX request sent by the loadTasks method, which updates the task list, we tell the slim loading bar to complete its progress through this.completeAjaxRequest.emit().

If we run our development server by issuing ng serve and heading to http://localhost:4200, we will see our application with a better user experience.

## Going Live with GitHub Pages

Our application is ready to be deployed to production. We have a persistence layer that saves all users' tasks. We have a serverless REST
API that accepts GET, POST, and DELETE requests to manipulate tasks. We have security, provided by Auth0. And we have a good looking Angular single page application interface. The only thing that is missing is a place to host our static (HTML, CSS, and JavaScript) files. That is exactly what GitHub Pages provides.

Using it is simple: we just need to create a repository and push our work to a branch called gh-pages. This branch should contain only our production bundles. To create a GitHub repository, go to GitHub, sign in (or sign up if you don't have an account), and choose the Create a New Repository option. Create your new repository, naming it task-list. Note that if you choose another name, you will have to adjust the base-href parameter of the ng build command that we will run later.

Now we have to add this repository as a remote to our application. When we created our project with Angular CLI, it already came with Git; we just have to add this remote, commit all our changes, and push to its master branch:

```bash
# adds new repo as a remote
git remote add origin git@github.com:your-username/your-repo.git

# commits our code
git add .
git commit -m "Angular app with a secure serverless REST API"

# push work to new repo
git push origin master
```

Having our code safe, we can now work on the going live task. Two steps are needed here. The first one is to prepare our code for production and package it. Again, Angular CLI comes in handy: we just have to issue `ng build --prod --base-href=/task-list/`. Note that we have to set base-href to the exact same name as our GitHub repository, otherwise our application won't be able to load all the resources and it won't work.

The second step used to be handled by Angular CLI, but this command has been removed in the latest release, so we will need a third party tool to help us here. Fortunately, there is one that is very easy to use called angular-cli-ghpages. To install it, issue `npm install -g angular-cli-ghpages`. After that we just have to execute `angular-cli-ghpages` (yep, without any parameters) and voilà: our app is up and running on GitHub Pages. Do
not forget to update the Allowed Callback URLs on your Auth0 account: the list of allowed URLs must include the URL where our app was exposed, something like https://brunokrebs.github.io/task-list/.

## Conclusion

As we could see, when we choose the right tools, it gets easy to achieve great accomplishments. We started with nothing—just an idea to develop a task list app—and managed to create and release it to the internet with not that much effort. We didn't even have to worry about building, supporting, and securing servers to host our web application or our database. If we had to manage these tasks by ourselves, we would have taken much more time and wouldn't be as confident about our app's security, fault tolerance, and scalability. And this is just the beginning: freeing ourselves from all these issues enables us to focus 100% on our ideas and on what makes our applications unique.", "image" : "https://cdn.auth0.com/blog/serverless-angular/logo.png", "date" : "February 22, 2017" } , { "title" : "ReactJS Authentication Tutorial", "description" : "Learn how to quickly build apps with ReactJS and add authentication the right way.", "author_name" : "Prosper Otemuyiwa", "author_avatar" : "https://en.gravatar.com/avatar/1097492785caf9ffeebffeb624202d8f?s=200", "author_url" : "https://twitter.com/unicodeveloper?lang=en", "tags" : "reactjs", "url" : "/reactjs-authentication-tutorial/", "keyword" : "TL;DR: ReactJS is a declarative, efficient, and flexible JavaScript library for building user interfaces. Currently, ReactJS has over 58,000 stars on GitHub. ReactJS makes it easy for you to build your web applications in the form of encapsulated components that manage their own state. In this tutorial, I'll show you how easy it is to build a web application with ReactJS and add authentication to it. Check out the repo to get the code.

ReactJS is a JavaScript library, built and maintained by Facebook. It was developed by Jordan Walke, a software engineer at Facebook. It was open-sourced and announced to the developer community in March 2015. Since then, it has undergone tremendous growth
and adoption in the developer community. In fact, as at the time of writing, ReactJS is the 5th most starred project of all time on GitHub.

Many web platforms use ReactJS to build their user interfaces. Such platforms include Netflix, Instagram, Airbnb, KhanAcademy, Walmart, and more. The documentation is very detailed, and there is a vibrant community of users. In addition, a plethora of ReactJS addons exist on GitHub for easy inclusion in your project, for whatever functionality you are trying to build.

## Understanding Key Concepts in ReactJS

ReactJS was influenced by XHP, an augmentation of PHP and Hack that allows XML syntax for the purpose of creating custom and reusable HTML elements. If you're coming from the world of jQuery and don't have experience with frameworks like Angular, Ember, or VueJS, you may find ReactJS very confusing. There are many questions you might have to ask yourself, such as:

- Why are JavaScript and HTML together in one script?
- What is JSX? Why is the syntax so weird?
- What is a state?
- Why do we need props?
- What are components, and why do we need them in our apps?

Don't worry, you'll have answers to your many questions soon. There are some key concepts you need to know when learning React. Once you have a basic understanding of these concepts, you'll be able to create your first ReactJS app without banging your head on the wall. These key concepts are:

- Components - the types and API
- Props
- State
- JSX

I'll give a basic overview of these concepts to nourish your understanding of ReactJS.

### Components - The Types and API

React is basically about components. A ReactJS app is just one big component made up of interoperable smaller components. Working with ReactJS means you are thinking in components most of the time.

An example of a component is an HTML5 tag, say `<header>`. A header can have attributes, it can be styled, and it also possesses its own behaviour. In ReactJS, you'll be able to build your own custom component using ES6 like so:

```javascript
class CustomComponent extends React.Component {
  render() {
    return <h3>This is my custom component!</h3>;
  }
}
```

So, your component can now be used as `<CustomComponent></CustomComponent>`.
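The core idea—a component maps its inputs to rendered output, and an app is smaller components composed into bigger ones—can be sketched without React at all. In the illustrative snippet below (all names are made up, and plain strings stand in for the elements React would build), `App` composes `Header` the same way a React app composes its components:

```javascript
// Framework-free sketch of "thinking in components":
// a component is a function from props to rendered output.
const Header = props => `<h3>${props.title}</h3>`;

// the app is just a bigger component composed of smaller ones
const App = () => `<div>${Header({ title: 'My App' })}</div>`;

console.log(App()); // <div><h3>My App</h3></div>
```

React adds a virtual DOM, state, and a lifecycle on top of this idea, but the compositional shape stays the same.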
React provides some methods that are triggered at various points from creating a component up until the component is destroyed. This is called the component's lifecycle. You can declare methods to hook into the component's lifecycle to control the behaviour of components in your app. Some examples of these lifecycle hooks are componentDidMount(), componentWillMount(), componentWillUnmount(), shouldComponentUpdate(), componentWillUpdate(), and more.

- **componentWillMount()**: this method is called before the component is initially rendered, so it is called before the render method is executed. You can't perform any type of DOM manipulation here because the component isn't available in the DOM yet.
- **componentDidMount()**: this method is called right after the component has been rendered, so it is called immediately after the render method has been executed. It's the best place to perform network and AJAX calls.
- **componentWillUnmount()**: this method is called right before the component is removed from the DOM.
- **shouldComponentUpdate()**: this method determines if a re-rendering should occur or not. It is never called on initial rendering, and it's always called before the render method.
- **componentWillUpdate()**: this method is called as soon as shouldComponentUpdate returns true. It is called just before the component is rendered with new data.

There are also methods like render and setState that you can use to render an element on the DOM and to set the state of a component, respectively.

Take this example for a spin and watch how these lifecycle hooks work. Observe the sequence of logs in the browser console:

```javascript
import React, { Component } from 'react';
import { render } from 'react-dom';

class Experiment extends Component {
  componentWillMount() {
    console.log('This will mount');
  }

  componentDidMount() {
    console.log('This did mount');
  }

  componentWillUnmount() {
    console.log('This will unmount');
  }

  render() {
    console.log('I am just rendering like a boss');
    return <div>I got rendered!</div>;
  }
}

render(<Experiment />, document.getElementById('root'));
```

### Props

Props is the short form for properties. Properties are attributes of a component; props are how components talk to each other. A tag in HTML such as `<img>` has an attribute (aka
prop) called src that points to the location of an image.

In React, you can have two components, FatherComponent and SonComponent. Let's see how they can talk to each other:

```javascript
class FatherComponent extends React.Component {
  render() {
    return <SonComponent quality="eye balls" />;
  }
}

class SonComponent extends React.Component {
  render() {
    return <p>I am a true son. I have my father's { this.props.quality }.</p>;
  }
}
```

Now, when the page is served and a `<FatherComponent>` is called, the son's paragraph—complete with his father's "eye balls"—will be rendered on the page.

### State

When developing ReactJS applications, it is important to know when and when not to use state in components. The question now is: when do I use state? When do I use props? Props are data that the component depends on to render correctly. Most times, they come from above, meaning they are passed down from a parent component to a child component. Like props, state holds information about the component, but it is handled differently. For example, the number of times a button was clicked, user input from a form, etc. When state changes in a component, the component automatically re-renders and updates the DOM. Inside a component, state is managed using a setState function:

```javascript
class Layout extends React.Component {
  constructor() {
    super();
    this.state = { position: 'right' };
  }

  render() {
    return <div>{this.state.position}</div>;
  }
}

class Button extends React.Component {
  constructor() {
    super();
    this.state = { count: 0 };
    // bind so `this` refers to the component inside the handler
    this.updateCount = this.updateCount.bind(this);
  }

  updateCount() {
    this.setState(prevState => {
      return { count: prevState.count + 1 };
    });
  }

  render() {
    return (
      <button onClick={this.updateCount}>
        Clicked {this.state.count} times
      </button>
    );
  }
}
```

Now, this works great for simple applications like the one we'll build in this tutorial. For medium and large apps, it is recommended to use a state management library like Redux or MobX, to avoid big balls of messy code and also to help you track every event happening within your app.

### JSX

Initially, looking at JSX seems awkward. JSX is the combination of HTML and JavaScript code in the same file. You can decide to name the extension of the file .jsx or just .js. An example of JSX is:

```javascript
class Layout extends React.Component {
  render() {
    return (
      <div>
        Hello! {this.state.layoutStructure}
        {/* e.g. 'frontend layout', 'backend layout' */}
      </div>
    );
  }
}
```

You can check out more information on JSX here.
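It can demystify JSX to see what it compiles to: Babel turns every tag into a function call of the form `React.createElement(type, props, ...children)`. The tiny stand-in below (not React's real implementation) shows the shape of the element object that results from that call:

```javascript
// Minimal stand-in for React.createElement, to show what JSX becomes.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

// roughly what <h3 className="title">Hi</h3> compiles to
const element = createElement('h3', { className: 'title' }, 'Hi');
console.log(element.type);        // 'h3'
console.log(element.children[0]); // 'Hi'
```

So the "weird" syntax is just sugar: once you read `<Foo bar="baz" />` as `createElement(Foo, { bar: 'baz' })`, mixing markup and JavaScript stops feeling magical.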
Next, let's build an application with ReactJS.

## Our App: Chuck Norris World

The app we will build today is called Chuck Norris World. Our app is an eye into the world of Chuck Norris and his greatness. The Chuck Norris World app will display different jokes about the legend. A list of common food jokes will be available to the general public, while the celebrity jokes will only be accessible to registered members.

Note: These days, celebrities demand a lot of cash for jokes made at their expense, and Chuck Norris isn't helping matters, always cracking jokes about them. Sigh!

## Build the Back-End

Let's build an API to serve the list of jokes to our app. We'll quickly build the API with Node. The API is simple. This is what we need:

- An endpoint to serve jokes about food - /api/jokes/food
- An endpoint to serve jokes about celebrities - /api/jokes/celebrity
- Secure the endpoint that serves celebrity jokes, so that it can only be accessed by registered users

Go ahead and fetch the Node.js backend from GitHub. Your server.js should look like this:

```javascript
'use strict';

const express = require('express');
const app = express();
const jwt = require('express-jwt');
const cors = require('cors');
const bodyParser = require('body-parser');

app.use(cors());
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));

const authCheck = jwt({
  secret: 'AUTH0_CLIENT_SECRET',
  audience: 'AUTH0_CLIENT_ID'
});

app.get('/api/jokes/food', (req, res) => {
  let foodJokes = [
    { id: 99991, joke: "When Chuck Norris was a baby, he didn't suck his mother's breast. His mother served him whiskey, straight out of the bottle." },
    { id: 99992, joke: 'When Chuck Norris makes a burrito, its main ingredient is real toes.' },
    { id: 99993, joke: 'Chuck Norris eats steak for every single meal. Most times he forgets to kill the cow.' },
    { id: 99994, joke: "Chuck Norris doesn't believe in ravioli. He stuffs a live turtle with beef and smothers it in pig's blood." },
    { id: 99995, joke: 'Chuck Norris recently had the idea to sell his urine as a canned beverage. We know this beverage as Red Bull.' },
    { id: 99996, joke: 'When Chuck Norris goes to out to eat, he orders a whole chicken, but he only eats its soul.' }
  ];
  res.json(foodJokes);
});

app.get('/api/jokes/celebrity', authCheck, (req, res) => {
  let celebrityJokes = [
    { id: 88881, joke: 'As President Roosevelt said: we have nothing to fear but fear itself. And Chuck Norris.' },
    { id: 88882, joke: 'Chuck Norris only lets Charlie Sheen think he is winning. Chuck won a long time ago.' },
    { id: 88883, joke: 'Everything King Midas touches turnes to gold. Everything Chuck Norris touches turns up dead.' },
    { id: 88884, joke: 'Each time you rate this, Chuck Norris hits Obama with Charlie Sheen and says, "Who is winning now?!"' },
    { id: 88885, joke: "For Charlie Sheen winning is just wishful thinking. For Chuck Norris it's a way of life." },
    { id: 88886, joke: "Hellen Keller's favorite color is Chuck Norris." }
  ];
  res.json(celebrityJokes);
});

app.listen(3333);
console.log('Listening on localhost:3333');
```

Your package.json file should look like this:

```json
{
  "name": "chuck-norris-jokes",
  "version": "1.0.0",
  "description": "",
  "main": "server.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "node server.js",
    "dev": "nodemon server.js"
  },
  "author": "Auth0",
  "license": "MIT",
  "dependencies": {
    "body-parser": "^1.15.2",
    "cors": "^2.8.1",
    "express": "^4.14.0",
    "express-jwt": "^3.4.0"
  }
}
```

Note: Make sure you have nodemon installed globally.

Once you have cloned the project, run npm install, then use Postman to exercise your routes. The food jokes endpoint should be http://localhost:3333/api/jokes/food. The celebrity jokes endpoint should be http://localhost:3333/api/jokes/celebrity. Don't worry about the middleware in charge of securing our endpoint for now; we'll deal with that later.

Now, let's build our frontend with ReactJS. Woot!

## Build the Front-End with ReactJS

In the early days of ReactJS, there was no tool or common way to set up a ReactJS app. However, React is more mature now; plenty of boilerplates, starters, and open source tools are currently available to help you set up an app. There is one that stands out because of its simplicity. It's called create-react-app (CRA), a CLI tool that is being maintained by Facebook. We also have a react script that comes bundled with Auth0 authentication, so you can use create-react-app to bootstrap an app with authentication support like this: `create-react-app my-app --scripts-version auth0-react-scripts`.

Go ahead and install the CRA tool globally like so:

```bash
npm install -g create-react-app
```

After installing globally, go ahead and scaffold a new ReactJS app like so:

```bash
create-react-app chucknorrisworld
```

and then open http://localhost:3000 to see your app.
see your app.

create-react-app automatically invokes Yarn for installation; if you don't have Yarn installed, it falls back to npm. Let's check out the structure of our newly scaffolded app:

```
my-app/
  README.md
  node_modules/   - all the packages required for the react app reside here
  package.json    - file that lists the packages residing in the node_modules folder
  public/
    index.html    - index file that declares the root div the App component is bound to
    favicon.ico   - the app's favicon
  src/
    App.css       - styles for the App component
    App.js        - basic App component
    App.test.js   - test file that contains tests for the App component
    index.css     - styles for the root div
    index.js      - JavaScript file that binds the root div to the parent App component
    logo.svg
```

We will work with this structure but make a few modifications. First, delete the `App.test.js` file.

**Note:** We are not writing any tests for this application; it's out of the scope of this tutorial. If you want to learn how to test your ReactJS applications, check out Testing React Applications with Jest.

Make the following modifications:

- Create a folder called `components` inside the `src` directory. This will house our components.
- Create a `CelebrityJokes.js` file inside the `components` directory. This component will take care of fetching the celebrity jokes and displaying them to the user.
- Create a `FoodJokes.js` file. This component will take care of fetching the food jokes and displaying them to the user.
- Create a `Nav.js` file. This component will be in charge of our navigation throughout the app.
- Create a folder called `utils` inside the `src` directory. This will house our helper functions.
- Delete `App.js`. Are you surprised? We won't need it.

**Fetch the API Data**

The first thing we need to do is fetch the API data from our Node backend to display in our app. Make sure the Node server is running. Let's create a helper file to handle fetching the API. Create a `chucknorris-api.js` file inside the `utils` directory, open it up, and add code to it like so:
```javascript
import axios from 'axios';

const BASE_URL = 'http://localhost:3333';

export { getFoodData, getCelebrityData };

function getFoodData() {
  const url = `${BASE_URL}/api/jokes/food`;
  return axios.get(url).then(response => response.data);
}

function getCelebrityData() {
  const url = `${BASE_URL}/api/jokes/celebrity`;
  return axios.get(url).then(response => response.data);
}
```
_chucknorris-api.js_

**Note:** Install axios in your app by running `npm install axios --save`.

We are using axios, a very good promise-based HTTP client; an alternative is superagent. In the `getFoodData` and `getCelebrityData` functions, axios fetches data from the API endpoints, then we unwrap the response to make the jokes ready for use in our components.

**Build the Nav Component**

The `Nav.js` file is our Nav component. Go ahead and add code to it like so:

```javascript
import React, { Component } from 'react';
import { Link } from 'react-router';
import '../App.css';

class Nav extends Component {
  render() {
    return (
      <nav className="navbar navbar-default">
        <div className="navbar-header">
          <Link className="navbar-brand" to="/">Chuck Norris World</Link>
        </div>
        <ul className="nav navbar-nav">
          <li><Link to="/">Food Jokes</Link></li>
          <li><Link to="/special">Celebrity Jokes</Link></li>
        </ul>
        <ul className="nav navbar-nav navbar-right">
          <li><button className="btn btn-info log">Log In</button></li>
          <li><button className="btn btn-danger log">Log Out</button></li>
        </ul>
      </nav>
    );
  }
}

export default Nav;
```
_Nav.js_

Open up your terminal and install react-router like so: `npm install react-router@3.0.0 --save`. At the time of this writing, react-router 4.0 is in alpha, so you can explore its features. The `Link` component from react-router enables seamless client-side transitions between routes without any page reload.

**Build the CelebrityJokes and FoodJokes Components**

By default, these two components will look similar in functionality; they both display data from different endpoints. Let's start with the FoodJokes component:

```javascript
import React, { Component } from 'react';
import { Link } from 'react-router';
import Nav from './Nav';
import { getFoodData } from '../utils/chucknorris-api';

class FoodJokes extends Component {
  constructor() {
    super();
    this.state = { jokes: [] };
  }

  getFoodJokes() {
    getFoodData().then((jokes) => {
      this.setState({ jokes });
    });
  }

  componentDidMount() {
    this.getFoodJokes();
  }

  render() {
    const { jokes } = this.state;
    return (
      <div>
        <Nav />
        <h3 className="text-center">Chuck Norris Food Jokes</h3>
        <hr/>
        { jokes.map((joke, index) => (
          <div className="col-sm-6" key={index}>
            <div className="panel panel-primary">
              <div className="panel-heading">
                <h3 className="panel-title"><span className="btn">#{ joke.id }</span></h3>
              </div>
              <div className="panel-body">{ joke.joke }</div>
            </div>
          </div>
        ))}
        <div className="col-sm-12">
          <div className="jumbotron text-center">
            <h2>Get Access to Celebrity Jokes By Logging In</h2>
            <Link className="btn btn-lg btn-success" to="/special">Celebrity Jokes</Link>
          </div>
        </div>
      </div>
    );
  }
}

export default FoodJokes;
```
_FoodJokes.js_
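The promise chain in `chucknorris-api.js` (resolve the HTTP response, then unwrap `.data`) can be seen in isolation with a stubbed client. This is a sketch under assumptions: `fakeAxios` is a stand-in object mimicking axios's `.get` shape, not the real library.

```javascript
// Stubbed axios-like client: resolves with an axios-style response object,
// where the payload lives under the `data` property.
const fakeAxios = {
  get(url) {
    return Promise.resolve({ status: 200, data: [{ id: 99991, joke: 'stub joke' }] });
  }
};

// Same shape as the helper in the tutorial, with the client injected so it
// can be exercised without a network.
function getFoodData(client) {
  const url = 'http://localhost:3333/api/jokes/food';
  // Unwrap the response so callers receive the jokes array directly.
  return client.get(url).then(response => response.data);
}

getFoodData(fakeAxios).then((jokes) => {
  console.log(jokes[0].joke); // 'stub joke'
});
```

Injecting the client is also what makes helpers like this easy to unit test: the component code only ever sees the unwrapped jokes array.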
Learn why we call `super()` in the class constructor.

Let's analyze the code above. The FoodJokes component is pulling data from an API, so it needs a way of holding that data. That's where state comes in: you can use props to pass data around and use state to hold/manage that data. In the constructor, we define the initial state as seen in the code below:

```javascript
constructor() {
  super();
  this.state = { jokes: [] };
}
```

In the `getFoodJokes` method, we call the `getFoodData` method we exported from the `chucknorris-api.js` helper file and set state as seen below:

```javascript
getFoodJokes() {
  getFoodData().then((jokes) => {
    this.setState({ jokes });
  });
}
```

We took advantage of one of the ReactJS lifecycle hooks: whatever is defined in this method is applied immediately after a component is mounted on the browser screen. So we invoked the `getFoodJokes` method in the hook as seen below. All we are trying to do is tell ReactJS to load the data from the API immediately after the FoodJokes component gets rendered.

```javascript
componentDidMount() {
  this.getFoodJokes();
}
```

Finally, we rendered the component with the ReactJS `render` method. This is the method that does the actual rendering on the screen. We extracted the loaded jokes from the state into a `jokes` constant, then looped through the `jokes` constant, which is now an array, to display its contents on the screen. When you loop through some form of data, you have to provide the `key` property and make sure it has a unique value, else an error will be thrown.

```javascript
const { jokes } = this.state;
```

Now, let's build the CelebrityJokes component in the same way:

```javascript
import React, { Component } from 'react';
import { Link } from 'react-router';
import Nav from './Nav';
import { getCelebrityData } from '../utils/chucknorris-api';

class CelebrityJokes extends Component {
  constructor() {
    super();
    this.state = { jokes: [] };
  }

  getCelebrityJokes() {
    getCelebrityData().then((jokes) => {
      this.setState({ jokes });
    });
  }

  componentDidMount() {
    this.getCelebrityJokes();
  }

  render() {
    const { jokes } = this.state;
    return (
      <div>
        <Nav />
        <h3 className="text-center">Privileged Chuck Norris Celebrity Jokes</h3>
        <hr/>
        { jokes.map((joke, index) => (
          <div className="col-sm-6" key={index}>
            <div className="panel panel-danger">
              <div className="panel-heading">
                <h3 className="panel-title"><span className="btn">#{ joke.id }</span></h3>
              </div>
              <div className="panel-body">{ joke.joke }</div>
            </div>
          </div>
        ))}
        <div className="col-sm-12">
          <div className="jumbotron text-center">
            <h2>View Food Jokes</h2>
            <Link className="btn btn-lg btn-info" to="/">Chuck Norris Food Jokes</Link>
          </div>
        </div>
      </div>
    );
  }
}

export default CelebrityJokes;
```
_CelebrityJokes.js_

Grab your coffee at this point because you have successfully created the Nav, CelebrityJokes, and FoodJokes components. Whoop! We need to take care of one more component so that our app can function. Can you guess? Yes, the root component!

**Build the Root Component**

This is the
component where we get to define how routing should work in our application and also bind it to the root div that holds the whole app. Open up `index.js` and add code to it like so:

```javascript
import React from 'react';
import ReactDOM from 'react-dom';
import CelebrityJokes from './components/CelebrityJokes';
import FoodJokes from './components/FoodJokes';
import { Router, Route, browserHistory } from 'react-router';

const Root = () => {
  return (
    <div className="container">
      <Router history={browserHistory}>
        <Route path="/" component={FoodJokes} />
        <Route path="/special" component={CelebrityJokes} />
      </Router>
    </div>
  );
};

ReactDOM.render(<Root />, document.getElementById('root'));
```
_index.js_

You might quickly notice that we are not defining a class here; rather, we just defined a `Root` function. ReactJS allows you to do that. We imported the Router from react-router, along with all the required components. The routing is simple: we have defined it to display the FoodJokes component once a user hits the `/` route, and to display the CelebrityJokes component once a user hits the `/special` route. The Beginner's Guide to React Router will give you a better understanding of how routing works in ReactJS. `ReactDOM.render` renders the Root component in the root div, which is the starting point of our ReactJS application.

Just a few things before we check our application in the browser: open up `public/index.html` and add Bootstrap. Now the content of the HTML file should look like this:

```html
<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="shortcut icon" href="%PUBLIC_URL%/favicon.ico">
    <link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet">
    <!--
      Notice the use of %PUBLIC_URL% in the tag above. It will be replaced
      with the URL of the `public` folder during the build. Only files inside
      the `public` folder can be referenced from the HTML. Unlike "/favicon.ico"
      or "favicon.ico", "%PUBLIC_URL%/favicon.ico" will work correctly both
      with client-side routing and a non-root public URL. Learn how to
      configure a non-root public URL by running `npm run build`.
    -->
    <title>React App</title>
  </head>
  <body>
    <div id="root"></div>
    <!--
      This HTML file is a template. If you open it directly in the browser,
      you will see an empty page. You can add webfonts, meta tags, or
      analytics to this file. The build step will place the bundled scripts
      into the <body> tag. To begin the development, run `npm start`. To
      create a production bundle, use `npm run build`.
    -->
  </body>
</html>
```
_index.html_
Open up `App.css` and add this style like so:

```css
.navbar-right {
  margin-right: 0px !important;
}

.log {
  margin: 5px 10px 0 0;
}
```

Feel free to check out your application in the browser. Right now, you should have something like this: *(Homepage and celebrity page of the current application.)*

**Adding Authentication to Your ReactJS App**

The majority of the apps we use on a daily basis have a means of authenticating users. I'll show you how to easily add authentication to our ReactJS application. We'll use Auth0 as our authentication service. Auth0 allows us to issue JSON Web Tokens (JWTs). If you don't already have an Auth0 account, sign up for a free one now. Log into your Auth0 management dashboard and navigate to the client app you wish to use. Get the Domain, Client ID, and Client Secret of this app; we'll need them soon.

**Secure the Node API**

We need to secure the API so that the celebrity endpoint will only be accessible to authenticated users. We can secure it easily with Auth0. Open up your `server.js` file and replace the `AUTH0_CLIENT_ID` and `AUTH0_CLIENT_SECRET` variables with your Client ID and Client Secret respectively, then add the `authCheck` middleware to the celebrity endpoint like so:

```javascript
app.get('/api/jokes/celebrity', authCheck, (req, res) => {
  // ... celebrity jokes as before ...
});
```

**Note:** You should load these values from environment variables for security reasons. No one should have access to your Auth0 secret.

Try accessing the `http://localhost:3333/api/jokes/celebrity` endpoint again from Postman. You should be denied access like so: *(Unauthorized access.)*

Next, let's add authentication to our front-end.

**Adding Authentication to Our ReactJS Front-End**

We'll create an authentication service to handle everything about authentication in our app. Go ahead and create an `AuthService.js` file inside the `utils` directory. Before we add code, you need to install the `jwt-decode` and `auth0-lock` node packages:
```bash
npm install jwt-decode auth0-lock --save
```

Open up the `AuthService.js` file and add code to it like so:

```javascript
import decode from 'jwt-decode';
import { browserHistory } from 'react-router';
import Auth0Lock from 'auth0-lock';

const ID_TOKEN_KEY = 'id_token';

const lock = new Auth0Lock('AUTH0_CLIENT_ID', 'AUTH0_DOMAIN', {
  auth: {
    redirectUrl: `${window.location.origin}`,
    responseType: 'token'
  }
});

lock.on('authenticated', (authResult) => {
  setIdToken(authResult.idToken);
  browserHistory.push('/');
});

export function login(options) {
  lock.show(options);
  return {
    hide() {
      lock.hide();
    }
  };
}

export function logout() {
  clearIdToken();
  browserHistory.replace('/');
}

export function requireAuth(nextState, replace) {
  if (!isLoggedIn()) {
    replace({ pathname: '/' });
  }
}

function setIdToken(idToken) {
  localStorage.setItem(ID_TOKEN_KEY, idToken);
}

function getIdToken() {
  return localStorage.getItem(ID_TOKEN_KEY);
}

function clearIdToken() {
  localStorage.removeItem(ID_TOKEN_KEY);
}

export function isLoggedIn() {
  const idToken = getIdToken();
  return !!idToken && !isTokenExpired(idToken);
}

function getTokenExpirationDate(encodedToken) {
  const token = decode(encodedToken);
  if (!token.exp) {
    return null;
  }
  const date = new Date(0);
  date.setUTCSeconds(token.exp);
  return date;
}

function isTokenExpired(token) {
  const expirationDate = getTokenExpirationDate(token);
  return expirationDate < new Date();
}
```
_AuthService.js_

In the code above, we created an instance of Auth0 Lock and passed in our credentials. We also listened for the `authenticated` event; when it fires, we grab the `id_token` returned from the Auth0 server and store it in localStorage. The `logout` function deletes the token and directs us back to the homepage. We check whether the token has expired via the `getTokenExpirationDate` and `isTokenExpired` methods. The `isLoggedIn` method returns `true` or `false` based on the presence and validity of a user's `id_token`. We also implemented a middleware, the `requireAuth` method; we'll use it to protect the `/special` route from being accessed by non-logged-in users.

Now let's update the Nav component to hide/show the login and logout buttons based on the user's authentication status. Your Nav component should look like this:

```javascript
import React, { Component } from 'react';
import { Link } from 'react-router';
import { login, logout, isLoggedIn } from '../utils/AuthService';
import '../App.css';

class Nav extends Component {
  render() {
    return (
      <nav className="navbar navbar-default">
        <div className="navbar-header">
          <Link className="navbar-brand" to="/">Chuck Norris World</Link>
        </div>
        <ul className="nav navbar-nav">
          <li><Link to="/">Food Jokes</Link></li>
          { isLoggedIn() ? <li><Link to="/special">Celebrity Jokes</Link></li> : '' }
        </ul>
        <ul className="nav navbar-nav navbar-right">
          { !isLoggedIn() ?
            <li><button className="btn btn-info log" onClick={() => login()}>Log In</button></li> :
            <li><button className="btn btn-danger log" onClick={() => logout()}>Log Out</button></li>
          }
        </ul>
      </nav>
    );
  }
}

export default Nav;
```
_Nav.js_

We used an arrow function to wrap and execute the `onClick` handlers.
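The token-expiry logic in AuthService can be exercised outside the browser. This is a minimal sketch, assuming Node's `Buffer` stands in for the jwt-decode package (which also handles base64url edge cases we skip here); the token built below is an unsigned demo token, not a real Auth0 credential:

```javascript
// Sketch of what jwt-decode does for the payload: split the token, decode
// the middle segment, and parse it as JSON.
function decodePayload(token) {
  const payload = token.split('.')[1];
  return JSON.parse(Buffer.from(payload, 'base64').toString('utf8'));
}

// Same logic as AuthService's isTokenExpired: compare the exp claim
// (seconds since the epoch) against the current time.
function isTokenExpired(token) {
  const { exp } = decodePayload(token);
  if (!exp) return false;
  return exp * 1000 < Date.now();
}

// Build an unsigned demo token whose payload expired back in 2001.
const payload = Buffer.from(JSON.stringify({ exp: 1000000000 })).toString('base64');
const demoToken = `header.${payload}.signature`;
console.log(isTokenExpired(demoToken)); // true
```

This is also why `isLoggedIn` checks more than the mere presence of a token: a stale `id_token` left in localStorage would otherwise count as a login.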
Check out How to Handle Events in React with Arrow Functions to understand why we used arrow functions here. We imported the `login`, `logout`, and `isLoggedIn` functions from the AuthService, attached the `login()` and `logout()` functions to the Log In and Log Out buttons respectively, and hid the `/special` link by checking the authentication status of the user via the `isLoggedIn()` function.

Open up the FoodJokes component and modify it like so:

```javascript
import { isLoggedIn } from '../utils/AuthService';
```

```javascript
{ isLoggedIn() ?
  <Link className="btn btn-lg btn-success" to="/special">Celebrity Jokes</Link> : '' }
```

We are enabling the link to celebrity jokes based on the login status of a user via the `isLoggedIn()` method.

**Add Some Values to the Auth0 Dashboard**

Just before you try to log in or sign up, head over to your Auth0 dashboard and add `http://localhost:3000` to the Allowed Callback URLs and Allowed Origins.

**Secure the Special Route**

We need to ensure that no one can go to the browser and just type `/special` to access the celebrity route. Open up `index.js` and add an `onEnter` prop with a value of `requireAuth` to the `/special` route like so:

```javascript
import { requireAuth } from './utils/AuthService';
...
<Route path="/special" component={CelebrityJokes} onEnter={requireAuth} />
```
_index.js_

Now, try to log in. *(Lock login widget; logged in, but unauthorized to see the celebrity content.)*

Oops! We have successfully logged in, but the content of the celebrity jokes is not showing up, and in the console we are getting a 401 Unauthorized error. Why? It's simple: we secured our endpoint earlier, but right now we are not passing the JWT to the backend yet. We need to send the JWT along with our request as a header so the secured endpoint can recognize the logged-in user.

**Updating the AuthService & ChuckNorris API Helper**

Open up `utils/AuthService.js` and make the `getIdToken()` function exportable like so:

```javascript
export function getIdToken() {
  return localStorage.getItem(ID_TOKEN_KEY);
}
```

Adding `export` just before the function makes it exportable. Go ahead and open up `utils/chucknorris-api.js`. We will tweak the `getCelebrityData` function a bit. It currently initiates a GET request only to fetch data from the API; we will pass an option to send an authorization header with a bearer token along with the GET request like so:

```javascript
import { getIdToken } from './AuthService';

function getCelebrityData() {
  const url = `${BASE_URL}/api/jokes/celebrity`;
  return axios.get(url, {
    headers: { Authorization: `Bearer ${getIdToken()}` }
  }).then(response => response.data);
}
```
_chucknorris-api.js_
The `/api/jokes/celebrity` endpoint will receive the token in the header and validate the user. If it is valid, the content will be provided to us. Try to log in again. *(Working Chuck Norris World app.)* Everything is working fine. Pat yourself on the back: you have just successfully built a ReactJS app and added authentication to it!

**Conclusion**

ReactJS is an awesome front-end library to employ in building your user interfaces. It takes advantage of the virtual DOM, it is fast, and it has a bubbling community. There are several React plugins/addons that the community provides to allow you to do almost anything in ReactJS.

Auth0 can help secure your ReactJS apps with more than just username-password authentication. It provides features like multifactor auth, anomaly detection, enterprise federation, and single sign on (SSO). Sign up today so you can focus on building features unique to your app.", "image" : "https://cdn.auth0.com/blog/blog/React-logo.png", "date" : "February 21, 2017" } , { "title" : "Introducing Auth0 Hooks", "description" : "Customize Auth0 platform with Node.js using Auth0 Hooks, a new extensibility mechanism powered by Webtasks.", "author_name" : "Tomasz Janczuk", "author_avatar" : "https://s.gravatar.com/avatar/53f70144dc9d7c76455fa91f858d4cec?s=200", "author_url" : "https://twitter.com/tjanczuk?lang=en", "tags" : "product", "url" : "/introducing-auth0-hooks/", "keyword" : "Auth0 Hooks are a new extensibility mechanism in Auth0 that allows you to customize the behavior of our platform using Node.js.

Developers love code and extensibility, and customization flexibility has always been an integral part of the Auth0 platform. Until now, you could use Auth0 Rules to execute arbitrary Node.js code during an authorization transaction. Today, we are introducing Auth0 Hooks, a new and improved mechanism to extend the Auth0 platform using code.

**Better Developer Experience**

While Auth0 Hooks build on the same underlying Webtask technology we developed to run Auth0 Rules, several aspects of the developer experience
are improved.

Using the management dashboard, you can create hooks, move them in and out of production, and edit hooks for selected extensibility points in the Auth0 platform. You edit hooks in the Webtask Editor, which offers a much richer feature set compared to the experience you are used to with Auth0 Rules:

- Syntax completion allows you to write code faster without referring to documentation.
- Integrated secret management improves the security of your code by providing a mechanism to securely store secrets while making them conveniently available in code.
- The integrated runner allows you to test your code without leaving the Webtask Editor.
- Real-time logs simplify debugging by streaming the output generated by your code.
- GitHub integration allows you to synchronize your hook with code stored in a GitHub repository; updating your hook is as simple as pushing to GitHub.

Using the Auth0 CLI, you can scaffold, create, activate, and deactivate hooks from the command line.

**What Can You Do Today?**

The initial release of Auth0 Hooks supports customizing the behavior of Auth0 at three new extensibility points:

- Client credentials exchange allows you to change the scopes and add claims to issued access tokens.
- Pre user registration allows you to intercept creation of a new database user to enforce a password policy or employ application-specific logic to prevent the signup.
- Post user registration allows you to perform any actions as a result of a successful creation of a new database user, e.g. send a message to Slack or create a record in your CRM system.

This is just the beginning: we are going to be adding many more extensibility points to the Auth0 platform using the Auth0 Hooks mechanism in the future.

**Auth0 Hooks vs. Auth0 Rules**

The introduction of Auth0 Hooks does not affect any existing Auth0 Rules; your rules continue to work unchanged. Auth0 Hooks provide a foundation for a new extensibility mechanism in Auth0, and all future extensibility points in the platform will build on top of Auth0 Hooks.
We are also planning to add support in Auth0 Hooks for the same things you use Auth0 Rules for today.

**Differences With Auth0 Rules**

If you have been using Auth0 Rules before, these are some of the key differences in the development experience when moving to Auth0 Hooks:

- In Auth0 Rules, you edit code on the Auth0 management dashboard. When using Auth0 Hooks, you edit code in the Webtask Editor.
- When using Auth0 Rules, you specify rule configuration common to all rules on the Auth0 management dashboard. Auth0 Hooks allow you to specify secret configuration directly in the Webtask Editor, separately for each hook.
- When developing Auth0 Rules, you can dry-run a rule from within the Auth0 management dashboard. Auth0 Hooks can be tested from within the Webtask Editor using the integrated runner, with access to real-time logs.
- There is no command line tool to manipulate Auth0 Rules. Auth0 Hooks come with the Auth0 CLI tool and can also be manipulated using the lower-level Webtask CLI tool.
- Auth0 management HTTP APIs offer a way to manipulate Auth0 Rules using any HTTP client. Auth0 Hooks are managed using Webtask management APIs.

**Learn More**

Check out the Auth0 Hooks documentation or head over directly to the Auth0 Hooks management dashboard to create your first hook.", "image" : "https://cdn.auth0.com/blog/auth0-webhooks-announcements/hooks_logo.png", "date" : "February 17, 2017" } , { "title" : "Announcing the Guardian Whitelabel SDK", "description" : "Learn about the Guardian Whitelabel SDK and how you can easily build your own authenticator leveraging our battle-tested solution.", "author_name" : "Prosper Otemuyiwa", "author_avatar" : "https://en.gravatar.com/avatar/1097492785caf9ffeebffeb624202d8f?s=200", "author_url" : "http://twitter.com/unicodeveloper?lang=en", "tags" : "guardian", "url" : "/announcing-guardian-whitelabel-sdk/", "keyword" : "On November 23, 2016, we tagged the first release of Guardian for iOS and Android, a whitelabel SDK to help users, developers, and organizations build their own authenticator and Guardian-like applications. Read on to find
out how it works and how you can use it in your projects.

"The Guardian whitelabel SDK helps you build your own authenticator and Guardian-like applications." — Tweet this

**White-Label Multifactor**

You can use the Guardian mobile SDKs, available for iOS and Android, to build your own white-label multifactor authentication application with complete control over the branding and look-and-feel.

**Guardian**

Guardian is Auth0's multifactor authentication solution that provides a simple and secure way to implement multifactor authentication. It also supports push notifications, removing the need for one-time pass codes for a truly frictionless multifactor experience. The Guardian app can be downloaded from the App Store or from Google Play.

With the Guardian SDK (iOS and Android), you can build your own mobile applications that work like Guardian, or integrate some Guardian functionality, such as receiving push notifications, into your existing mobile applications. A typical scenario: while building a banking app, you can make use of the Guardian SDK in your existing mobile app to receive and confirm push notifications when someone performs an ATM transaction.

**How Can I Use It?**

Take a look at the iOS and Android docs. You can also just enable push notifications and SMS by toggling the buttons from the Auth0 dashboard. *(Push notifications and SMS.)*

**Conclusion**

The Guardian mobile SDK opens up a myriad of opportunities for developers and organizations wishing to leverage an already secure, tested, existing solution for building and enhancing their mobile apps. Try it today!", "image" : "https://cdn.auth0.com/blog/guardian/Guardianlogo.png", "date" : "February 16, 2017" } , { "title" : "Angular Testing In Depth: HTTP Services", "description" : "Learn how to test HTTP services in Angular.
We will start by writing tests for requests and finish by refactoring them to a cleaner format.", "author_name" : "Gábor Soós", "author_avatar" : "https://secure.gravatar.com/avatar/9d2e715baab928f5bedb837bfcb70b2b", "author_url" : "https://twitter.com/blacksonic86", "tags" : "angular2", "url" : "/angular-testing-in-depth-http-services/", "keyword" : "Get the Migrating an Angular 1 App to Angular 2 book for free. Spread the word and download it now!

When we write a web application, most of the time it has a backend. The most straightforward way to communicate with the backend is with HTTP requests. These requests are crucial for the application, so we need to test them. More importantly, these tests need to be isolated from the outside world. In this article, I will show you how to test your requests properly and elegantly.

This article is the second part of a series in which I share my experiences testing different building blocks of an Angular application. It relies heavily on dependency-injection-based testing, and it is recommended that you read the first part if you are not familiar with the concepts:

- Services
- HTTP Services (this article)
- Components
- Pipes
- Routing

**Testing Our First Request**

To get started, we will test a basic request: the GET request. It will call a parameterized URL without a body or additional headers. The GitHub API has an endpoint for retrieving public profile information about users; the profile information is returned in JSON format.

```typescript
import { Injectable } from '@angular/core';
import { Http, Response } from '@angular/http';
import 'rxjs/add/operator/map';

@Injectable()
export class GithubService {
  constructor(private http: Http) {}

  getProfile(userName: string) {
    return this.http
      .get(`https://api.github.com/users/${userName}`)
      .map((response: Response) => response.json());
  }
}
```

The `getProfile` method sends a GET request to the API and returns the response. Every request made with the `Http` module returns an Observable, and the returned value will always be a `Response` object, which can return the response body. With the help of the `json` or `text` method, we can transform the value
of the Observable.

The first thing we have to do is set up the test dependencies. The `Http` dependency is required; if we don't provide it, we will get this error message: `No provider for Http!`

```typescript
beforeEach(() => {
  TestBed.configureTestingModule({
    providers: [GithubService],
    imports: [HttpModule]
  });
});
```

The problem with the real `HttpModule` is that we will end up sending real HTTP requests. It is an absolutely terrible idea to do this with unit tests, because it breaks the tests' isolation from the outside world: under no circumstances will the result of the test be guaranteed. For example, the network can go down and our well-crafted tests will no longer work. Instead, Angular has a built-in way to fake HTTP requests:

```typescript
import { MockBackend, MockConnection } from '@angular/http/testing';
import { Http, BaseRequestOptions, ResponseOptions, Response, RequestMethod } from '@angular/http';

TestBed.configureTestingModule({
  providers: [
    GithubService,
    MockBackend,
    BaseRequestOptions,
    {
      provide: Http,
      useFactory: (backend, defaultOptions) => {
        return new Http(backend, defaultOptions);
      },
      deps: [MockBackend, BaseRequestOptions]
    }
  ]
});
```

Instead of providing `Http` as a module, it is better to use the factory provider and pass the `MockBackend` instance to the `Http` constructor. This way, the fake backend captures every request and can respond accordingly. Before writing the first test, it is also important to get an instance of the `MockBackend`, because without it we won't be able to respond to requests.

```typescript
beforeEach(inject([GithubService, MockBackend], (github, mockBackend) => {
  subject = github;
  backend = mockBackend;
}));
```

Let's write the first test that checks the result of the request:

```typescript
it('should get profile data of user', (done) => {
  let profileInfo = { login: 'blacksonic', id: 325, name: 'tester' };
  backend.connections.subscribe((connection: MockConnection) => {
    let options = new ResponseOptions({ body: profileInfo });
    connection.mockRespond(new Response(options));
  });

  subject.getProfile('blacksonic').subscribe((response) => {
    expect(response).toEqual(profileInfo);
    done();
  });
});
```

Requests made are available through the `connections` property of the fake backend as an Observable. When it receives the request through the `subscribe` method, we can respond with a JSON object. In our example, only the response body is set; in addition, you can set the status and the headers of the response.
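Stripped of Angular specifics, the fake-backend pattern just shown can be sketched in plain JavaScript. All names below (`FakeBackend`, `onConnection`, `respond`) are illustrative stand-ins, not Angular's API:

```javascript
// Framework-free sketch of the MockBackend idea: the fake transport records
// each outgoing request so a test can inspect it and supply a canned response.
class FakeBackend {
  constructor() {
    this.handlers = [];
  }

  // Tests subscribe to outgoing connections, mirroring backend.connections.
  onConnection(handler) {
    this.handlers.push(handler);
  }

  // The "http client" calls this; a registered handler may respond.
  request(req) {
    return new Promise((resolve) => {
      this.handlers.forEach((handler) => handler({ request: req, respond: resolve }));
    });
  }
}

// Service under test: fetches a GitHub-style profile through the backend.
function getProfile(backend, username) {
  return backend
    .request({ method: 'GET', url: `https://api.github.com/users/${username}` })
    .then((response) => response.body);
}

const backend = new FakeBackend();
backend.onConnection((connection) => {
  // Assert on the request, then respond with a canned body.
  if (connection.request.url.endsWith('/users/blacksonic')) {
    connection.respond({ status: 200, body: { login: 'blacksonic', id: 325 } });
  }
});

getProfile(backend, 'blacksonic').then((profile) => {
  console.log(profile.login); // 'blacksonic'
});
```

The key property is the same as in the Angular version: no network is touched, and the test fully controls both sides of the exchange.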
Another new element is the `done` callback that is passed into the test function. It is needed when writing asynchronous tests: this way, the test doesn't end when the execution of the function ends; it waits until the `done` callback is called. Of course, there is a timeout for hanging tests that don't call this `done` method within a given interval.

HTTP requests are asynchronous by nature, but the fake backend we use responds to them synchronously: it calls the `subscribe` method synchronously. You may wonder what makes the test asynchronous, then. The answer is: false positive tests. If we comment out the response to the request, the test will still pass, even though we have an assertion. The problem here is that the `subscribe` callback never gets executed if we don't respond to the request.

```typescript
backend.connections.subscribe((connection: MockConnection) => {
  // connection.mockRespond(new Response(options));
});
```

**Checking the Request**

Until now, we haven't made any assertions about the request. What was the called URL? What was the method of the request? To make the test more strict, we have to check these parameters:

```typescript
expect(connection.request.url).toEqual('https://api.github.com/users/blacksonic');
expect(connection.request.method).toEqual(RequestMethod.Get);
```

The original request object resides on the `MockConnection` object; with its `url` and `method` properties, we can add the assertions easily.

**Digging Deeper**

GET requests are good for retrieving data, but we'll make use of other HTTP verbs to send data. One example is POST. User authentication is a perfect fit for POST requests: when modifying data stored on the server, we need to restrict access to it, and this is usually done with a POST request on the login page. Auth0 provides a good solution for handling user authentication; it has a feature to authenticate users based on username and password. To demonstrate how to test POST requests, we will send a request to the Auth0 API. We won't be using their recommended package here, because it would abstract out the actual request, but for real-world scenarios I would recommend using it.

```typescript
export class Auth0Service {
  constructor(private http: Http) {}

  login(username: string, password: string) {
    let headers = new Headers({ 'Content-Type': 'application/json' });
    let options = new RequestOptions({ headers });
    return this.http
      .post(
        'https://blacksonic.eu.auth0.com/usernamepassword/login',
        { username, password, client_id: 'YOUR_CLIENT_ID' },
        options
      )
      .map((response: Response) => response.text());
  }
}
```
The main difference between this example and the previous one is that here we are sending a JSON payload to the server and appending additional headers to it. We don't have to manually `JSON.stringify` the payload; the request methods will take care of it. The response will be in text format, so this time we don't have to convert anything to JSON. Let's look at the test to see how we can check every detail of the request:

```typescript
it('should be called with proper arguments', (done) => {
  backend.connections.subscribe((connection: MockConnection) => {
    expect(connection.request.headers.get('Content-Type')).toEqual('application/json');
    expect(connection.request.getBody()).toEqual(
      JSON.stringify({ username: 'blacksonic', password: 'secret' }, null, 2)
    );
    ...
  });
  ...
});
```

The headers are also available on the request object and can be checked one by one. The payload can be retrieved with the `getBody` method. This method always returns the body converted to a string, which is what we will see in the network traffic. When we send JSON, it will contain the output of the `JSON.stringify` method, printed with spaces and an indentation of two.

**Refactoring**

The previous setup works, but it has multiple problems:

- For every service we test, the provider configuration will be exactly the same.
- The subscription to the outgoing connection responds the same way immediately, regardless of the URL.
- The assertions are verbose and hard to read.

Those who have tested their HTTP services in AngularJS may remember how simple the setup was for those tests: AngularJS provided convenient methods for setting expectations on requests. Angular doesn't have those built-in functionalities, but very similar ones are present in the ngx-http-test library, which can solve the problems mentioned earlier. Let's look at the test with the library for the GitHub profile fetch:

```typescript
import { FakeBackend } from 'ngx-http-test';

beforeEach(() => {
  TestBed.configureTestingModule({
    providers: [
      GithubService,
      FakeBackend.getProviders()
    ]
  });
});

beforeEach(inject([GithubService, FakeBackend], (github, fakeBackend) => {
  subject = github;
  backend = fakeBackend;
}));

it('should get profile data of user', (done) => {
  backend.expectGet('https://api.github.com/users/blacksonic').respond(profileInfo);

  subject.getProfile('blacksonic').subscribe((response) => {
    expect(response).toEqual(profileInfo);
    done();
  });
});
```

The setup becomes a function call to `FakeBackend.getProviders()`, and setting the expectation hides the subscription and gives more readable methods like `expectGet`.
The login test also becomes less verbose:

```typescript
it('should be called with proper arguments', (done) => {
  backend
    .expectPost(
      'https://blacksonic.eu.auth0.com/usernamepassword/login',
      { username: 'blacksonic', password: 'secret', client_id: 'YOUR_CLIENT_ID' }
    )
    .respond('<form />');

  subject.login('blacksonic', 'secret').subscribe((response) => {
    expect(response).toEqual('<form />');
    done();
  });
});
```

**Conclusion: What We've Learned About Angular HTTP Testing**

In this tutorial, we managed to:

- set up tests and fake an HTTP backend,
- write assertions for requests,
- refactor the tests to be more readable.

Angular has the tools to test HTTP requests, but it still lacks the readable assertion methods that were present in AngularJS. Until such methods are implemented, the ngx-http-test library can be used. To see the tests in action, check out this GitHub repository.", "image" : "https://cdn.auth0.com/blog/angular/logo.png", "date" : "February 15, 2017" } , { "title" : "Glossary of Modern JavaScript Concepts: Part 1", "description" : "Learn the fundamentals of functional programming, reactive programming, and functional reactive programming in JavaScript.", "author_name" : "Kim Maida", "author_avatar" : "https://en.gravatar.com/userimage/20807150/4c9e5bd34750ec1dcedd71cb40b4a9ba.png", "author_url" : "https://twitter.com/KimMaida", "tags" : "javascript", "url" : "/glossary-of-modern-javascript-concepts/", "keyword" : "TL;DR: In the first part of the Glossary of Modern JS Concepts series, we'll gain an understanding of functional programming, reactive programming, and functional reactive programming. To do so, we'll learn about purity, statefulness and statelessness, immutability and mutability, imperative and declarative programming, higher-order functions, observables, and the FP, RP, and FRP paradigms.

**Introduction**

Modern JavaScript has experienced massive proliferation over recent years and shows no signs of slowing. Numerous concepts appearing in JS blogs and documentation are still unfamiliar to many front-end developers. In this post series, we'll learn intermediate and advanced concepts in the current front-end programming landscape and explore how they apply to modern JavaScript.

**Concepts**

In this article, we'll address concepts that are crucial to understanding functional programming, reactive programming, and functional reactive programming and their use with JavaScript. You can jump straight into
each concept here, or continue reading to learn about them in order: purity (pure functions, impure functions, side effects); state (stateful and stateless); immutability and mutability; imperative and declarative programming; higher-order functions; functional programming; observables (hot and cold); reactive programming; functional reactive programming. Purity: pure functions. A pure function's return value is determined only by its input values (arguments), with no side effects. When given the same argument, the result will always be the same. Here is an example: function half(x) { return x / 2; } The half() function takes a number x and returns a value of half of x. If we pass an argument of 8 to this function, the function will always return 4. When invoked, a pure function can be replaced by its result; for example, we could replace half(8) with 4 wherever used in our code with no change to the final outcome. This is called referential transparency. Pure functions only depend on what's passed to them. A pure function cannot reference variables from a parent scope unless they are explicitly passed into the function as arguments; even then, the function cannot modify the parent scope: var someNum = 8; // this is not a pure function: function impureHalf() { return someNum / 2; } In summary: pure functions must take arguments; the same input will always produce the same output (return); pure functions rely only on local state and do not mutate external state (note: console.log changes global state); pure functions do not produce side effects; pure functions cannot call impure functions. Impure functions: an impure function mutates state outside its scope. Any function that has side effects (see below) is impure, and procedural functions with no utilized return value are also impure. Consider the following examples: // impure function producing a side effect: function showAlert() { alert('This is a side effect!'); } // impure function mutating external state: var globalVal = 1; function incrementGlobalVal(x) { globalVal += x; } // impure function calling pure functions procedurally: function proceduralFn() { const
result1 = pureFnFirst(1); const result2 = pureFnLast(2); console.log(`Done with ${result1} and ${result2}.`); } // impure function that resembles a pure function but returns different results given the same inputs: function getRandomRange(min, max) { return Math.random() * (max - min) + min; } Side effects in JavaScript: when a function or expression modifies state outside its own context, the result is a side effect. Examples of side effects include making a call to an API, manipulating the DOM, raising an alert dialog, writing to a database, etc. If a function produces side effects, it is considered impure. Functions that cause side effects are less predictable and harder to test, since they result in changes outside their local scope. Purity takeaways: plenty of quality code consists of impure functions that procedurally invoke pure functions; this still produces advantages for testing and immutability. Referential transparency also enables memoization: caching and storing function call results and reusing the cached results when the same inputs are used again. It can be a challenge to determine when functions are truly pure. To learn more about purity, check out the following resources: Pure versus impure functions; Master the JavaScript interview: what is a pure function?; Functional programming: pure functions. State: state refers to the information a program has access to and can operate on at a point in time. This includes data stored in memory as well as OS memory, input/output ports, a database, etc. The contents of the variables in an application at any given instant are representative of the application's state. Stateful: stateful programs, apps, or components store data in memory about the current state; they can modify the state as well as access its history. The following example is stateful: // stateful: var number = 1; function increment() { return number++; } increment(); // global variable modified: number = 2. Stateless: stateless functions or components perform tasks as though running them for the first time, every time. This means they do not reference or utilize any information from
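The memoization that referential transparency enables can be sketched in a few lines. This is an illustrative helper, not from the article: the names memoize and memoHalf are assumptions, and the sketch assumes single-argument functions called with primitive arguments.

```javascript
// Caching is safe only because a pure function's call can be replaced
// by its result (referential transparency).
function memoize(fn) {
  const cache = new Map();
  return function (arg) {
    if (!cache.has(arg)) {
      cache.set(arg, fn(arg)); // compute once per distinct argument
    }
    return cache.get(arg); // reuse the stored result afterwards
  };
}

// A pure function: same input always yields the same output.
function half(x) {
  return x / 2;
}

const memoHalf = memoize(half);
memoHalf(8); // computes and caches 4
memoHalf(8); // served from the cache: 4
```

Note that memoizing an impure function such as getRandomRange() above would change its behavior, which is exactly why purity matters here.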
earlier in their execution. Statelessness enables referential transparency: functions depend only on their arguments and do not access or need knowledge of anything outside their scope. Pure functions are stateless. See the following example: // stateless: var number = 1; function increment(n) { return n + 1; } increment(number); // global variable not modified; returns 2. Stateless applications do still manage state; however, they return their current state without mutating previous state. This is a tenet of functional programming. State takeaways: state management is important for any complex application. Stateful functions or components modify state and store history, but are more difficult to test and debug. Stateless functions rely only on their inputs to produce outputs, and a stateless program returns new state rather than modifying existing state. To learn more about state: State; Advantages of stateless programming; Stateful and stateless components, the missing manual; Redux: predictable state container for JavaScript apps. Immutability and mutability: the concepts of immutability and mutability are slightly more nebulous in JavaScript than in some other programming languages. You will hear a lot about immutability when reading about functional programming in JS. It's important to know what these terms mean classically and also how they are referenced and implemented in JavaScript. The definitions are simple enough: immutable: if an object is immutable, its value cannot be modified after creation. Mutable: if an object is mutable, its value can be modified after creation. By design: immutability and mutability in JavaScript. In JavaScript, strings and number literals are immutable by design. This is easily understandable if we consider how we operate on them: var str = 'hello'; var anotherStr = str.substring(2); // result: str = 'hello' (unchanged); anotherStr = 'llo' (new string). Using the substring() method on our 'hello' string does not modify the original string; instead, it creates a new string. We could reassign the str variable value to something else, but once we've created our 'hello' string, it will always be 'hello'. Number literals
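The idea of returning new state rather than modifying existing state can be sketched with a Redux-style reducer. This is an illustrative sketch under assumed names (counterReducer, the INCREMENT action type); it is not Redux itself, only the state-in, new-state-out shape that Redux reducers follow.

```javascript
// A stateless update: the reducer never mutates the state it receives;
// it returns a brand-new object describing the next state.
function counterReducer(state, action) {
  switch (action.type) {
    case 'INCREMENT':
      return { count: state.count + 1 }; // new object, old state untouched
    default:
      return state; // unknown actions leave state as-is
  }
}

const initial = { count: 1 };
const next = counterReducer(initial, { type: 'INCREMENT' });
// initial.count is still 1; next is a distinct object with count 2
```

Because the previous state object survives unchanged, earlier states can be kept around for debugging or undo, which is the persistence advantage discussed below.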
are immutable as well. The following will always have the same result: var three = 1 + 2; // three = 3. Under no circumstances could 1 + 2 evaluate to anything other than 3. This demonstrates that immutability by design does exist in JavaScript. However, JS developers are aware that the language allows most things to be changed: objects and arrays are mutable by design. Consider the following: var arr = [1, 2, 3]; arr.push(4); // arr = [1, 2, 3, 4]; var obj = { greeting: 'hello' }; obj.name = 'jon'; // obj = { greeting: 'hello', name: 'jon' }. In these examples, the original objects are mutated; new objects are not returned. To learn more about mutability in other languages, check out Mutable vs Immutable Objects. In practice: immutability in JavaScript. Functional programming in JavaScript has gained a lot of momentum, but by design JS is a very mutable, multi-paradigm language. Functional programming emphasizes immutability, and other functional languages will raise errors when a developer tries to mutate an immutable object. So how can we reconcile the innate mutability of JS when writing functional or functional reactive JS? When we talk about functional programming in JS, the word 'immutable' is used a lot, but it's the responsibility of the developer to write their code with immutability in mind. For example, Redux relies on a single, immutable state tree, yet JavaScript itself is capable of mutating the state object. To implement an immutable state tree, we need to return a new state object each time the state changes. JavaScript objects can also be frozen with Object.freeze() to make them immutable. Note that this is shallow, meaning object values within a frozen object can still be mutated. To further ensure immutability, functions like Mozilla's deepFreeze() and the npm deep-freeze package can recursively freeze objects. Freezing is most practical when used in tests rather than in application JS: tests will alert developers when mutations occur so they can be corrected or avoided in the actual build, without Object.freeze() cluttering up the core code. There are also libraries available to support immutability in JS: Mori delivers persistent data
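Since Object.freeze() is shallow, a recursive deep freeze looks roughly like the following. This is a sketch modeled on the recursive approach mentioned above; the deepFreeze name is illustrative here, and the function assumes plain, acyclic objects.

```javascript
// Recursively freeze an object and every nested object value.
function deepFreeze(obj) {
  Object.getOwnPropertyNames(obj).forEach(function (name) {
    const value = obj[name];
    if (value && typeof value === 'object') {
      deepFreeze(value); // freeze nested objects first
    }
  });
  return Object.freeze(obj); // then freeze the object itself
}

const state = deepFreeze({ user: { name: 'jon' } });
try {
  state.user.name = 'sara'; // silently ignored, or throws in strict mode
} catch (e) {
  // TypeError when running under 'use strict'
}
// state.user.name is still 'jon'
```

A plain Object.freeze(state) would have left state.user mutable, which is exactly the shallowness caveat described above.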
structures based on Clojure; Immutable.js by Facebook also provides immutable collections for JS; and utility libraries like Underscore.js and Lodash provide methods and modules to promote a more immutable, functional programming style. Immutability and mutability takeaways: overall, JavaScript is a very mutable language, and some styles of JS coding rely on this innate mutability. When writing functional JS, however, implementing immutability requires mindfulness: JS will not natively throw errors when you modify something unintentionally. Testing and libraries can assist, but working with immutability in JS takes practice and methodology. Immutability has advantages: it results in code that is simpler to reason about, and it enables persistence, the ability to keep older versions of a data structure and copy only the parts that have changed. The disadvantage of immutability is that many algorithms and operations cannot be implemented efficiently. To learn more about immutability and mutability: Immutability in JavaScript; Immutable objects with Object.freeze; Mutable vs Immutable Objects; Using immutable data structures in JavaScript; Getting started with Redux (includes examples for addressing immutable state). Imperative and declarative programming: while some languages were designed to be imperative (C, PHP) or declarative (SQL, HTML), JavaScript (and others, like Java and C#) can support both programming paradigms. Most developers familiar with even the most basic JavaScript have written imperative code: instructions informing the computer how to achieve a desired result. If you've written a for loop, you've written imperative JS. Declarative code tells the computer what you want to achieve rather than how, and the computer takes care of achieving the end result without explicit description from the developer. If you've used Array.map(), you've written declarative JS. Imperative programming: imperative programming describes how a program's logic works, in explicit commands with statements that modify the program state. Consider a function that increments every number in an array of
integers. An imperative JavaScript example of this might be: function incrementArray(arr) { let resultArr = []; for (let i = 0; i < arr.length; i++) { resultArr.push(arr[i] + 1); } return resultArr; } This function shows exactly how the function's logic works: we iterate over the array and explicitly increase each number, pushing it to a new array; we then return the resulting array. This is a step-by-step description of the function's logic. Declarative programming: declarative programming describes what a program's logic accomplishes without describing how. A very straightforward example of declarative programming can be demonstrated with SQL. We can query a database table (People) for people with the last name Smith like so: SELECT * FROM People WHERE LastName = 'Smith'. This code is easy to read and describes what we want to accomplish; there is no description of how the result should be achieved: the computer takes care of that. Now consider the incrementArray() function we implemented imperatively above; let's implement it declaratively: function incrementArray(arr) { return arr.map(item => item + 1); } We show what we want to achieve, but not how it works. The Array.map() method returns a new array with the results of running the callback on each item from the passed array. This approach does not modify existing values, nor does it include any sequential logic showing how it creates the new array. JavaScript's map, reduce, and filter are declarative, functional array methods, and utility libraries like Lodash provide methods like takeWhile, uniq, zip, and more in addition to map, reduce, and filter. Imperative and declarative programming takeaways: as a language, JavaScript allows both imperative and declarative programming paradigms. Much of the JS code we read and write is imperative, but with the rise of functional programming in JS, declarative approaches are becoming more common. Declarative programming has obvious advantages with regard to brevity and readability, but at the same time it can feel magical. Many JavaScript beginners can benefit from gaining experience writing imperative JS before diving too deep into declarative programming. To
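To show the other declarative array methods just mentioned (filter and reduce alongside map) working together, here is a small example; the data and the sumOfEvens name are illustrative, not from the article.

```javascript
const nums = [1, 2, 3, 4, 5, 6];

// Declarative: we state what we want (the sum of the even numbers)
// and let the array methods handle the iteration.
const sumOfEvens = nums
  .filter(n => n % 2 === 0)            // keep only even numbers
  .reduce((total, n) => total + n, 0); // fold them into a single sum

// sumOfEvens === 12
```

The imperative equivalent would need an explicit loop, an accumulator variable, and an if statement; the chained version expresses the same intent with no bookkeeping on display.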
learn more about imperative and declarative programming: Imperative vs declarative programming; What's the difference between imperative, procedural, and structured programming?; Imperative and (functional) declarative JS in practice; JavaScript map, reduce, and filter. Higher-order functions: a higher-order function is a function that accepts another function as an argument, or returns a function as a result. In JavaScript, functions are first-class objects: they can be stored and passed around as values. We can assign a function to a variable or pass a function to another function: const double = function(x) { return x * 2; }; const timesTwo = double; timesTwo(4); // returns 8. One example of taking a function as an argument is a callback. Callbacks can be inline anonymous functions or named functions: const myBtn = document.getElementById('myButton'); // anonymous callback function: myBtn.addEventListener('click', function(e) { console.log(`Click event: ${e}`); }); // named callback function: function btnHandler(e) { console.log(e); } myBtn.addEventListener('click', btnHandler); We can also pass a function as an argument to any other function we create and then execute that argument: function sayHi() { alert('Hi!'); } function greet(greeting) { greeting(); } greet(sayHi); // alerts 'Hi!'. When passing a named function as an argument, as in the two examples above, we don't use parentheses: this way we're passing the function as an object. Parentheses execute the function and pass the result instead of the function itself. Higher-order functions can also return another function: function whenMeetingJohn() { return function() { alert('Hi!'); }; } var atLunchToday = whenMeetingJohn(); atLunchToday(); // alerts 'Hi!'. Higher-order function takeaways: the nature of JavaScript functions as first-class objects makes them prime for facilitating functional programming. To learn more about higher-order functions: Functions are first-class objects in JavaScript; Higher-order functions in JavaScript; Higher-order functions, part 1 of Functional Programming in JavaScript; Eloquent JavaScript: higher-order functions. Functional programming: now we've learned about purity, statelessness, immutability, declarative
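Because functions can be both accepted as arguments and returned as results, we can build small combinators out of them. The following compose2 helper is an illustrative sketch (the name is not from the article): it takes two functions and returns a new function that applies them in sequence.

```javascript
// A higher-order function on both counts: it accepts functions
// as arguments AND returns a function as its result.
function compose2(f, g) {
  return function (x) {
    return f(g(x)); // apply g first, then f
  };
}

const double = x => x * 2;
const increment = x => x + 1;

// Build a new function without invoking anything yet.
const doubleThenIncrement = compose2(increment, double);
doubleThenIncrement(4); // double(4) = 8, then increment(8) = 9
```

Note that compose2(increment, double) and compose2(double, increment) are different functions; the order of application matters, just as it does with processCopy() later in the article.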
programming, and higher-order functions: these are all concepts that are important in understanding the functional programming paradigm. Functional programming with JavaScript: functional programming encompasses the above concepts in the following ways: core functionality is implemented using pure functions without side effects; data is immutable; functional programs are stateless; and imperative container code manages side effects and executes declarative, pure core code. If we tried to write a JavaScript web application composed of nothing but pure functions with no side effects, it couldn't interact with its environment and therefore wouldn't be particularly useful. Let's explore an example. Say we have some text copy and we want to get its word count; we also want to find keywords that are longer than five characters. Using functional programming, our resulting code might look something like this: const fpCopy = `Functional programming is powerful and enjoyable to write. It's very cool!`; // remove punctuation from string: const stripPunctuation = (str) => str.replace(/[.,!?;:'`~]/g, ''); // split passed string on spaces to create an array: const getArr = (str) => str.split(' '); // count items in the passed array: const getWordCount = (arr) => arr.length; // find items in the passed array longer than 5 characters, and make items lower case: const getKeywords = (arr) => arr.filter(item => item.length > 5).map(item => item.toLowerCase()); // process copy to prep the string, create an array, count words, and get keywords: function processCopy(copy, prepFn, arrFn, countFn, kwFn) { const copyArray = arrFn(prepFn(copy)); console.log(`Word count: ${countFn(copyArray)}`); console.log(`Keywords: ${kwFn(copyArray)}`); } processCopy(fpCopy, stripPunctuation, getArr, getWordCount, getKeywords); // result: Word count: 11; Keywords: functional, programming, powerful, enjoyable. This code is available to run at this JSFiddle (functional programming with JavaScript). It's broken into digestible, declarative functions with clear purpose. If we step through it and read the comments, no further explanation of the code should be necessary. Each core function is modular and relies only on its inputs (pure). The last function processes the core to generate the collective outputs. This
function, processCopy(), is the impure container that executes the core and manages side effects. We've used a higher-order function that accepts the other functions as arguments to maintain the functional style. Functional programming takeaways: immutable data and statelessness mean that the program's existing state is not modified; instead, new values are returned. Pure functions are used for core functionality, and in order to implement the program and handle necessary side effects, impure functions can call pure functions imperatively. To learn more about functional programming: Introduction to Immutable.js and functional programming concepts; Functional programming for the rest of us; Functional programming with JavaScript; Don't be scared of functional programming; So you want to be a functional programmer; Lodash: functional programming guide; What is the difference between functional and imperative programming languages?; Eloquent JavaScript, 1st edition: functional programming; Functional programming by example; Functional programming in JavaScript (video series); Introduction to functional JavaScript; How to perform side effects in pure functional programming; Preventing side effects in JavaScript. Observables: observables are similar to arrays, except that instead of being stored in memory, items arrive asynchronously over time (observables are also called streams). We can subscribe to observables and react to events emitted by them. JavaScript observables are an implementation of the observer pattern. Reactive Extensions (commonly known as Rx*) provides an observables library for JS via RxJS. To demonstrate the concept of observables, let's consider a simple example: resizing the browser window. It's easy to understand observables in this context: resizing the browser window emits a stream of events over a period of time as the window is dragged to its desired size. We can create an observable and subscribe to it to react to the stream of resize events: // create window resize stream and throttle resize events: const resize$ = Rx.Observable.fromEvent(window, 'resize').throttleTime(350); //
subscribe to the resize$ observable and log window width x height: const subscription = resize$.subscribe((event) => { let t = event.target; console.log(`${t.innerWidth}px x ${t.innerHeight}px`); }); The example code above shows that as the window size changes, we can throttle the observable stream and subscribe to the changes to respond to new values in the collection. This is an example of a hot observable. Hot observables: user interface events like button clicks and mouse movements are hot. Hot observables will always push, even if we're not specifically reacting to them with a subscription. The window resize example above is a hot observable: the resize$ observable fires whether or not a subscription exists. Cold observables: a cold observable begins pushing only when we subscribe to it; if we subscribe again, it will start over. Let's create an observable collection of numbers ranging from 1 to 5: // create source number stream: const source$ = Rx.Observable.range(1, 5); // subscribe to source$ observable: const subscription = source$.subscribe( (value) => console.log(`Next: ${value}`), // onNext (event) => console.log(`Error: ${event}`), // onError () => console.log('Completed!') // onCompleted ); We can subscribe to the source$ observable we just created. Upon subscription, the values are sent in sequence to the observer: the onNext callback logs the values ('Next: 1', and so on) until completion. The cold source$ observable we created doesn't push unless we subscribe to it. Observables takeaways: observables are streams, and we can observe any stream: from resize events to existing arrays to API responses. We can create observables from almost anything; a promise is an observable with a single emitted value, but observables can return many values over time. We can operate on observables in many ways: RxJS utilizes numerous operator methods. Observables are often visualized using points on a line, as demonstrated on the RxMarbles site. Since the stream consists of asynchronous events over time, it's easy to conceptualize this in a linear fashion and use such visualizations to understand Rx* operators. The following RxMarbles image illustrates the filter operator. To learn more about
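To make the cold-observable behavior concrete without depending on any particular RxJS version, here is a toy 'cold observable' in plain JavaScript. This is a deliberately simplified sketch, not the RxJS API: every call to subscribe() replays the full sequence from the start, synchronously.

```javascript
// A toy cold observable: the producer runs only when subscribed,
// and each new subscription restarts the sequence from the beginning.
function range(start, count) {
  return {
    subscribe(onNext, onCompleted) {
      for (let i = 0; i < count; i++) {
        onNext(start + i); // values are pushed only once subscribed
      }
      if (onCompleted) onCompleted();
    }
  };
}

const source$ = range(1, 5);
const seen = [];
source$.subscribe(v => seen.push(v), () => seen.push('done'));
// seen: [1, 2, 3, 4, 5, 'done']
```

A hot observable, by contrast, would be producing values (like the window resize events above) whether or not anyone had subscribed.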
observables: Observable; Creating and subscribing to simple observable sequences; The introduction to reactive programming you've been missing: request and response; Introducing the Observable; RxMarbles; Rx Book: Observable. Reactive programming: reactive programming is concerned with propagating and responding to incoming events over time, declaratively (describing what to do rather than how). Reactive programming is often associated with Reactive Extensions, an API for asynchronous programming with observable streams (abbreviated Rx*), which provides libraries for a variety of languages, including JavaScript (RxJS). Reactive programming with JavaScript: here is an example of reactive programming with observables. Let's say we have an input where the user can enter a six-character confirmation code, and we want to print out the latest valid code attempt. Our HTML might look like this: <!-- HTML --> <input id='confirmation-code' type='text'> <p><strong>Valid code attempt:</strong> <code id='attempted-code'></code></p>. We'll use RxJS and create a stream of input events to implement our functionality, like so: // JS: const confCodeInput = document.getElementById('confirmation-code'); const attemptedCode = document.getElementById('attempted-code'); const confCodes$ = Rx.Observable.fromEvent(confCodeInput, 'input').map((e) => e.target.value).filter((code) => code.length === 6); const subscription = confCodes$.subscribe( (value) => attemptedCode.innerText = value, (event) => console.warn(`Error: ${event}`), () => console.info('Completed!') ); This code can be run at this JSFiddle (reactive programming with JavaScript). We'll observe events from the confCodeInput input element. Then we'll use the map operator to get the value from each input event, and we'll filter out any results that are not six characters so they won't appear in the returned stream. Finally, we'll subscribe to our confCodes$ observable and print out the latest valid confirmation code attempt. Note that this was done in response to events over time: this is the crux of reactive programming. Reactive programming takeaways: the reactive programming paradigm involves observing and reacting to events in asynchronous data streams. RxJS is used in Angular and is gaining popularity as a JavaScript
solution for reactive programming. To learn more about reactive programming: The introduction to reactive programming you've been missing; Introduction to Rx; The Reactive Manifesto; Understanding reactive programming and RxJS; Reactive programming; Modernization of reactivity; Reactive-Extensions RxJS API core. Functional reactive programming: in simple terms, functional reactive programming could be summarized as declaratively responding to events or behaviors over time. To understand the tenets of FRP in more depth, let's take a look at FRP's formulation; then we'll examine its use in relation to JavaScript. What is functional reactive programming? A more complete definition from Conal Elliott, FRP's formulator, would be that functional reactive programming is 'denotative and temporally continuous'. Elliott mentions that he prefers to describe this programming paradigm as denotative continuous-time programming, as opposed to 'functional reactive programming'. At its most basic, original definition, FRP has two fundamental properties: denotative: the meaning of each function or type is precise, simple, and implementation-independent ('functional' references this); and continuous time: variables have a particular value for a very short time, and between any two points are an infinite number of other points; this provides transformation flexibility, efficiency, modularity, and accuracy ('reactive' references this). Again, when we put it simply: functional reactive programming is programming declaratively with time-varying values. To understand continuous time (temporal continuity), consider an analogy using vector graphics. Vector graphics have an infinite resolution: unlike bitmap graphics (discrete resolution), vector graphics scale indefinitely; they never pixelate or become indistinct when particularly large or small, the way bitmap graphics do. 'FRP expressions describe entire evolutions of values over time, representing these evolutions directly as first-class values.' (Conal Elliott) Functional reactive programming should be: dynamic: it can react over time or to input changes; time-varying: reactive behaviors can change continually while
reactive values change discretely; efficient: it minimizes the amount of processing necessary when inputs change; and historically aware: pure functions map state from a previous point in time to the next point in time, and state changes concern the local element, not the global program state. Conal Elliott's slides on the essence and origins of FRP can be viewed here. The programming language Haskell lends itself to true FRP due to its functional and lazy nature. Evan Czaplicki, the creator of Elm, gives a great overview of FRP in his talk Controlling Time and Space: Understanding the Many Formulations of FRP. In fact, let's talk briefly about Evan Czaplicki's Elm. Elm is a functional, typed language for building web applications; it compiles to JavaScript, CSS, and HTML. The Elm Architecture was the inspiration for the Redux state container for JS apps. Elm was originally considered a true functional reactive programming language, but as of version 0.17 it implemented subscriptions instead of signals, in the interest of making the language easier to learn and use. In doing so, Elm bid farewell to FRP. Functional reactive programming and JavaScript: the traditional definition of FRP can be difficult to grasp, especially for developers who don't have experience with languages like Haskell or Elm. However, the term has come up more frequently in the front-end ecosystem, so let's shed some light on its application in JavaScript. In order to reconcile what you may have read about FRP in JS, it's important to understand that Rx*, Bacon.js, Angular, and others are not consistent with the two primary fundamentals of Conal Elliott's definition of FRP. Elliott states that Rx* and Bacon.js are not FRP; instead, they are 'compositional event systems inspired by FRP'. As it relates specifically to JavaScript implementations, FRP refers to programming in a functional style while creating and reacting to streams. This is fairly far from Elliott's original formulation (which specifically excludes streams as a component), but it is nevertheless inspired by traditional FRP. It's also crucial to understand that JavaScript
inherently interacts with the user and UI, the DOM, and often a backend. Side effects and imperative code are par for the course, even when taking a functional or functional reactive approach. Without imperative or impure code, a JS web application with a UI wouldn't be much use, because it couldn't interact with its environment. Let's take a look at an example to demonstrate the basic principles of FRP-inspired JavaScript. This sample uses RxJS and prints out mouse movements over a period of ten seconds: // create a time observable that adds an item every 1 second; map so the resulting stream contains event values: const time$ = Rx.Observable.timer(0, 1000).timeInterval().map((e) => e.value); // create a mouse movement observable; throttle to every 350ms; map so the resulting stream pushes objects with x and y coordinates: const move$ = Rx.Observable.fromEvent(document, 'mousemove').throttleTime(350).map((e) => { return { x: e.clientX, y: e.clientY }; }); // merge time + mouse movement streams and complete after 10 seconds: const source$ = Rx.Observable.merge(time$, move$).takeUntil(Rx.Observable.timer(10000)); // subscribe to the merged source$ observable; if the value is a number, createTimeset(); if the value is a coordinates object, addPoint(): const subscription = source$.subscribe( (x) => { if (typeof x === 'number') { createTimeset(x); } else { addPoint(x); } }, (err) => console.log('Error:', err), // onError () => console.log('Completed!') // onCompleted ); // add element to DOM to print out points touched in a particular second: function createTimeset(n) { const elem = document.createElement('div'); const num = n + 1; elem.id = 't' + num; elem.innerHTML = `${num}: `; document.body.appendChild(elem); } // add points touched to the latest time in the stream: function addPoint(pointObj) { // add point to last appended element: const numberElem = document.getElementsByTagName('body')[0].lastChild; numberElem.innerHTML += ` (${pointObj.x}, ${pointObj.y}) `; } You can check out this code in action in this JSFiddle (FRP-inspired JavaScript). Run the fiddle and move your mouse over the result area of the screen as it counts up to ten seconds; you should see mouse coordinates appear along with the counter, indicating where your mouse was during each one-second time interval. Let's briefly discuss this implementation step by step. First, we'll create an observable called time$: this is a timer that adds a value to
the collection every 1000ms (every second). We need to map the timer event to extract its value and push it into the resulting stream. Next, we'll create a move$ observable from the document's mousemove event. Mouse movement is continuous: at any point in the sequence, there are an infinite number of points in between, so we'll throttle this to make the resulting stream more manageable. Then we can map the event to return an object with x and y values representing mouse coordinates. Next, we want to merge the time$ and move$ streams. merge is a combining operator; this way we can plot which mouse movements occurred during each time interval. We'll call the resulting observable source$. We'll also limit the source$ observable so that it completes after ten seconds (10000ms). Now that we have our merged stream of time and movement, we'll create a subscription to the source$ observable so we can react to it. In our onNext callback, we'll check to see whether the value is a number or not: if it is, we want to call a function named createTimeset(); if it's a coordinates object, we'll call addPoint(). In the onError and onCompleted callbacks, we'll simply log some information. Let's look at the createTimeset() function: we create a new div element for each second interval, label it, and append it to the DOM. In the addPoint() function, we print out the latest coordinates in the most recent timeset div. This associates each set of coordinates with its corresponding time interval, so we can now read where the mouse has been over time. These functions are impure: they have no return value, and they also produce side effects (DOM manipulation). As mentioned earlier, the JavaScript we need to write for our apps frequently interacts with scope outside its functions. Functional reactive programming takeaways: FRP encodes actions that react to events using pure functions that map state from a previous point in time to the next point in time. FRP in JavaScript doesn't adhere to the two primary fundamentals of Conal Elliott's FRP, but there is certainly value in abstractions of the original concept. JavaScript relies
heavily on side effects and imperative programming, but we can certainly take advantage of the power of FRP concepts to improve our JS. Consider this quote from the first edition of Eloquent JavaScript (the second edition is available here): 'Fu-Tzu had written a small program that was full of global state and dubious shortcuts. Reading it, a student asked: You warned us against these techniques, yet I find them in your program. How can this be? Fu-Tzu said: There is no need to fetch a water hose when the house is not on fire. (This is not to be read as an encouragement of sloppy programming, but rather as a warning against neurotic adherence to rules of thumb.)' (Marijn Haverbeke, Eloquent JavaScript, 1st edition, chapter 6) To learn more about functional reactive programming: Functional reactive programming for beginners; The functional reactive misconception; What is functional reactive programming?; Haskell: functional reactive programming; Composing reactive animations; Specification for a functional reactive programming language; A more elegant specification for FRP; Elm: a farewell to FRP; Early inspirations and new directions in functional reactive programming; Breaking down FRP; Rx* is not FRP. Conclusion: we'll conclude with another excellent quote from the first edition of Eloquent JavaScript: 'A student had been sitting motionless behind his computer for hours, frowning darkly. He was trying to write a beautiful solution to a difficult problem but could not find the right approach. Fu-Tzu hit him on the back of his head and shouted: Type something! The student started writing an ugly solution. After he had finished, he suddenly understood the beautiful solution.' (chapter 6) The concepts necessary for understanding functional programming, reactive programming, and functional reactive programming can be difficult to grasp, let alone master. Writing code that takes advantage of a paradigm's fundamentals is the initial step, even if it isn't entirely faithful at first. Practice illuminates the path ahead and also reveals potential revisions. With this
glossary as a starting point, you can begin taking advantage of these concepts and programming paradigms to increase your JavaScript expertise. If anything is still unclear regarding these topics, please consult the links in each section for additional resources. We'll cover more concepts in the next Modern JS Glossary post!", "image" : "https://cdn.auth0.com/blog/js-fatigue/JSLogo.png", "date" : "February 14, 2017" } , { "title" : "Making use of RxJS in Angular", "description" : "Angular is built on top of RxJS. Learn how you can make use of RxJS in your Angular apps for a clearer and more maintainable codebase.", "author_name" : "Wojciech Kwiatek", "author_avatar" : "https://en.gravatar.com/userimage/102277541/a28d70be6ae2b9389db9ad815cab510e.png?size=200", "author_url" : "https://twitter.com/WojciechKwiatek", "tags" : "angular", "url" : "/making-use-of-rxjs-angular/", "keyword" : "TL;DR: Angular (previously known as Angular 2) incorporates RxJS and uses it internally. We can make use of some RxJS goodies and introduce FRP to write more robust code in our apps. If you're new to RxJS, I recommend reading Understanding Reactive Programming and RxJS before proceeding. RxJS is all about streams, operators to modify them, and observables. Functional reactive programming (FRP): FRP has recently become a buzzword. To give you a deeper understanding of that topic, there is an awesome post from André Staltz: The introduction to reactive programming you've been missing. What is the key takeaway from this comprehensive post? Reactive programming is actually programming with asynchronous data streams. But where does the word functional come into play? Functional is about how we can modify these streams to create new sets of data. A stream can be used as an input to another stream, and we have a bunch of operators in RxJS to do things like this. So, can we do some FRP with RxJS? The short answer is: yes. And we'll do so with Angular. RxJS in Angular: to get started with RxJS in Angular, all we need to do is import the operators we want to
use. RxJS is itself an Angular dependency, so it's ready to use out of the box. Passing observables to the view: we are about to start with some observables created ad hoc. Let's create an observable from a JavaScript array: const items = Observable.of([1, 2, 3]). Now, we can use the created observable as a component's property and pass it into the view. Angular introduced a new filter which will be a perfect fit here. It's called async, and its purpose is to unwrap promises and observables; in the case of an observable, it'll pass along the last value of the observable. import { Component } from '@angular/core'; import { Observable } from 'rxjs/Rx'; @Component({ selector: 'my-app', template: `<ul><li *ngFor='let item of items | async'>...</li></ul>` }) export class AppComponent { public items = Observable.of([1, 2, 3]); } We should see a list of elements in the browser. This is our hello world example to see how async works and how we can use it. Http: Angular relies on RxJS for some of its internal features. One of the most well-known services is Http. In Angular 1.x, $http was a promise-based service; in Angular 2+, it's based on observables. This means that we can also make use of the async pipe here. Let's try to create a real-world example with a service: we want to fetch a list of repos authored by Auth0 on GitHub. import { Injectable } from '@angular/core'; import { Http } from '@angular/http'; import 'rxjs/add/operator/map'; @Injectable() export class RepoService { constructor(private _http: Http) {} getReposForUser(user: string): Observable<any> { return this._http.get(`https://api.github.com/users/${user}/repos`).map((res: any) => res.json()); } } Here, we have the service, which exposes the getReposForUser method to make an HTTP call. Note the return type of the method -- it's an Observable<any>. We can add it into the module and use it in the component: import { RepoService } from './repo.service'; ... export class AppComponent { public repos; constructor(repoService: RepoService) { this.repos = repoService.getReposForUser('auth0'); console.log(this.repos); } } Something important has just happened: if you take a look into the network tab of your developer tools in the browser, no call was made. Let's add the for loop
with the async pipe: `<li *ngFor='let repo of repos | async'>...</li>`. Now the call for repositories is fired, and we can see that the list of repos has been fetched correctly. Why is that? Hot and cold observables: the http.get observable above is cold. That means each subscriber sees the same events from the beginning and is independent of any other subscriber. It also means that if there's no subscriber, no value is emitted. Let's see this one in action by adding more subscribe calls: ... subscribe ... Now you'll be able to see three calls. You can now see one more thing -- async makes a subscription under the hood. On the other hand, we have hot observables. The difference is: no matter how many subscribers there are, the observable starts just once. We can make our observable hot, instead of cold, by using the share operator: import 'rxjs/add/operator/share'; ... this._http.get(...).share() ... Now you should see just one call. If you want to go deeper into the topic, here is a Hot vs Cold Observables article by Ben Lesh. Programming the reactive way in Angular -- handling events: we've covered how you've probably used RxJS observables for HTTP in Angular, even if you weren't aware of it. However, there are many more things you can do with streams, even if Angular doesn't require you to do so. Now we move on to click events. The traditional, imperative way of handling click events in Angular is as follows: <button (click)='handleButtonClick(1)'>Up vote</button> ... export class AppComponent { handleButtonClick(value: number) { console.log(value); } } We can create a stream of click events using an RxJS Subject. A Subject is both an observer and an observable at the same time: it can emit values (using next), and you can subscribe to it (using subscribe). Here, you can see the same case achieved with a functional approach using RxJS: ... export class AppComponent { public counter$: Subject<number> = new Subject<number>(); ... } It's not much different than the previous one, though. Let's try to add some more logic, like making a sum of clicks and printing some text instead of just numbers: ... scan((acc, current: number) => acc + current) ... map((value) => `Sum of clicks: ${value}`) ... The key point is that we define how the clicks
stream will behave. We say that we don't really need clicks but only the sum of them, with some prepended text, and this sum will be our stream -- not the pure click events -- and we subscribe to the stream of summed values. In other words, the key of functional programming is to make the code declarative, not imperative. Communication between components: let's briefly address communication between Angular components using an RxJS approach. It's actually about dumb components in the RxJS approach to an Angular world. Last time, I described the change detection of Angular and what we can do with it to fine-tune the app. We'll add the component with the clicks$ stream as the input: import { ChangeDetectionStrategy, Component, Input } from '@angular/core'; @Component({ selector: 'my-score', template: ..., changeDetection: ChangeDetectionStrategy.OnPush }) export class ScoreComponent { @Input() public score: number; } Note that the component has ChangeDetectionStrategy.OnPush turned on, so this means that we assume that a new reference will come as the input. The component accepts a numeric parameter, but there is no reference to streams. We can handle this with the async pipe: <my-score [score]='... | async'></my-score>. Forms: another place where you can use the power of RxJS is forms. We can use all of the knowledge that we have gained up to this point and see how we can create a reactive login form. First, let's start by adding ReactiveFormsModule from @angular/forms to the module. Then we can make use of the reactive forms introduced in Angular. Here's how it can look: import { FormBuilder, FormGroup } from '@angular/forms'; <form [formGroup]='loginForm'> <label>Login</label> <input formControlName='login' type='text'> ... password ... <button type='submit'>Submit</button> </form> export class AppComponent implements OnInit { public loginForm: FormGroup; constructor(private formBuilder: FormBuilder) {} ngOnInit() { this.loginForm = this.formBuilder.group({ login: ... }); } } We now have a few additional blocks: formControlName -- added to match names from templates to the appropriate fields in the controller; formBuilder.group -- creates the form; [formGroup] -- connects the template and the controller. We can now use
the valueChanges observable: loginForm.valueChanges.subscribe(...). Each changed field will emit an event and will be logged to the console. This offers many possibilities, since we can take advantage of any operator that RxJS provides. In this example, let's focus on submitting the form in a reactive way. We can put a submit$ subject on the form: ... export class AppComponent { public loginForm: FormGroup; private submit$ = new Subject(); ... } We now have a stream of submit events and a stream of values. All that remains is to combine these streams. The resulting stream will emit the current state of the fields when the form is submitted. The desired behavior can be achieved by using the withLatestFrom operator of RxJS. The combined stream is as follows: submit$.withLatestFrom(...).subscribe((values) => { console.log('Submitted values', values); }). We now have combined streams, and the logic is consolidated -- it can be written in a single line. Just to recap, here is the final code for the form component. Conclusion: Angular has a lot more features than meets the eye, and RxJS is, in my personal opinion, one of the best of them. It can rocket the app to the next level in terms of maintainability and clarity. The future is more declarative, less imperative code. RxJS can appear intimidating at first, but once you're familiar with its functionality and operators, it supplies many benefits, such as defining logic at declaration time. All of this is to say: the code is easier to understand compared to imperative code. RxJS requires a different mode of thinking, but it is very worthwhile to learn and use", "image" : "https://cdn.auth0.com/blog/reactive-programming/logo.png", "date" : "February 13, 2017" } , { "title" : "Build The Ultimate Account Based Marketing Machine with Account Selection", "description" : "Learn various strategies on how to reach the right people through calculated account selection.", "author_name" : "Brandon Redlinger", "author_avatar" : "https://cdn.auth0.com/blog/ultimate-abm-machine/brandon-redlinger.png", "author_url" : "https://twitter.com/Brandon_Lee_09", "tags" : "account-based-marketing", "url"
: "/ultimate-account-based-marketing-machine-with-account-selection/", "keyword" : "David Ogilvy was far ahead of his time. He was known as the king of Madison Avenue, and though often viewed by his peers as eccentric, and even bizarre (he was known to wear capes to board meetings), he got many things right. One particular quote that still stands out and is applicable in today's B2B world is this: “Don't count the people you reach. Reach the people that count.” Traditional demand gen has been about reaching more people; however, when taking an account based approach, being able to reach the right people is one of the core principles. Selecting accounts is a combination of art and science, intuition and logic. Companies combine gut feel, historical performance, and sometimes predictive data science to develop an ideal customer profile, tier their accounts, then allocate their resources properly to work said accounts. Get this wrong, and you'll be burning money. Get this right, and you'll be printing money at will. That's why this is the foundation for every good account based marketing program. In fact, it goes beyond that: it's the core of account based everything (ABE). How should you select your target accounts? The process of choosing target accounts comes down to a specific definition (the ICP) of the companies that best match your goals. This definition includes the key dimensions that define high-value accounts that are most likely to buy, including things like: firmographics, technographics, pain points, behaviors, intent data, and strategy. One way to decide on your ICP is to reverse engineer your existing best customers to see what they have in common. Another is to analyze the best customers of your closest competitors: where are they winning, and why? Once you know the specific profile of an ideal account, it's time to actually pick the accounts -- to name the companies you'll be targeting. On this front, there's a maturity spectrum, with increasing accuracy and sophistication as you move up. The accuracy and completeness of your account selection improves as you move up the maturity spectrum. Do what you need to do, but be advised: time, money, and effort spent on rigorous account selection will be repaid many times over in the number and quality of opportunities you generate. “Data is a never ending problem. Prospectors have a clear idea of which companies should be a good fit and which shouldn't.” - Aaron Ross, Predictable Revenue. How should you tier your target accounts? Not all accounts are the same, so you'll also need to organize your target accounts into tiers, based on how valuable they might be and how much research and personalization will go into each one. We take a tiered approach where tier 1 is a classic ABE/ABM approach, tier 2 is a lighter approach, and tier 3 is a hybrid approach. Tier 1: these accounts get the “full” account based everything treatment, meaning each one gets deep research, a customized plan, personalized content, bespoke campaigns, and lots of one-to-one attention. You map out each buying center, understand where there may be revenue potential, build out the organization chart and see which contacts you know and which you need to know, research key business priorities and individual motivations, and identify relationships and connections to the account. You publish detailed account dossiers, maintain them quarterly, and even have internal chat groups or forums dedicated to each account. Tier 2: these accounts also get individual research, but perhaps it's limited to a few key talking points for each account. These accounts may not get completely personalized plays and content, but they should still get highly relevant touches based on their industry and persona. Instead of one-to-one campaigns, these accounts get one-to-few campaigns. Instead of fully bespoke content, perhaps you take content written for their industry and customize it with their logo on the cover and a personalized first and last paragraph. Tier 3: this style covers all the accounts that you want to target but don't have the resources for personalization and customization. ITSMA calls this programmatic ABM; it's basically traditional marketing with account-level targeting. The key difference from demand gen is that instead of scoring leads, you track account-level engagement and wait until the account hits a sufficient threshold to label them a marketing qualified account (MQA). How many accounts should you have in each tier? When determining who you'll target in your account based everything program, an important decision is how many accounts you should be targeting within each tier of your program. Like most decisions related to strategy, there is no one-size-fits-all answer. The number of accounts you choose to target for each tier in your ABE program, and the number of accounts per AE and SDR, will depend on things like: your expected deal sizes, the length of the sales cycle, your available sales resources, your current level of engagement with major prospects, and the intensiveness of your account based strategy. How should you allocate your resources? We think the best way to select target accounts is by looking at how many resources you have to invest. This depends on how you handle the different tiers or styles of ABE. A given enterprise account executive may only be able to handle a few tier 1 accounts, but a corporate rep could probably handle a few hundred tier 3 target accounts at a time. The right number of accounts is the number that your team can handle in a tier-appropriate way. We know one company where management felt their reps could handle 100 named accounts at a time, but they gave each one 150 accounts so the reps wouldn't feel like their territories were too small. Across a sample of Engagio customers, the median number of accounts per account owner is 50. Quite a few Engagio customers have a lower number (20 to 30 accounts per account owner), and quite a few have 100 or more accounts per owner. As you can see, there's a lot that goes into establishing the base for your account based everything program, and we've just scratched the surface of the first step. To learn more about the who, the what, and the where of account based sales development, download The Clear & Complete Guide to Account Based Sales Development. How many accounts does your team manage on average? How did you make this decision? About Brandon Redlinger: Brandon Redlinger is the Director of Growth at Engagio, the account based everything platform that orchestrates human connections. He is passionate about the intersection between tech and psychology, especially as it applies to growing businesses. You can follow him on Twitter @brandon_lee_09 or connect with him on LinkedIn", "image" : "https://cdn.auth0.com/blog/ultimate-abm-machine/ABM-logo.png", "date" : "February 10, 2017" } , { "title" : "Migrating a PHP 5 App to PHP 7 (Tools & Implementation) - Part 3", "description" : "Let's go through migrating a simple PHP 5 app to PHP 7", "author_name" : "Prosper Otemuyiwa", "author_avatar" : "https://en.gravatar.com/avatar/1097492785caf9ffeebffeb624202d8f?s=200", "author_url" : "http://twitter.com/unicodeveloper?lang=en", "tags" : "php5", "url" : "/migrating-a-php5-app-to-7-part-three/", "keyword" : "TL;DR: Many PHP applications are still running on PHP 5.x, not ready to take full advantage of the awesome features that PHP 7 offers. A lot of developers have not made the switch because of certain fears of compatibility issues, migration challenges, and the strange, awkward feeling that migrating will take away a big chunk of their time. In the first part of this tutorial, we learned how to set up a PHP 7 development environment. In the second part of this tutorial, we discussed extensively all the new features PHP 7 offers and the language constructs and features that have been either removed or deprecated. This time, we'll show you how you can leverage all the new PHP 7 features when migrating, and also the tools that will help to make the process painless. You need to be aware that, for the most part, PHP 5.x code can run on PHP 7. In PHP 7, there are some backwards incompatible changes, so applications built with PHP 5.x that use functions and language constructs that have been removed or have had their internal
implementation changed drastically will spit out errors while trying to run on PHP 7. Tools to aid migration: one of the most frustrating parts of our jobs as software developers is having to work on large, old codebases. In a situation where you are tasked with migrating a large PHP 5.x application that has probably been in existence for about 10 years, how would you go about it? The easiest and most obvious way of migrating is to initially clone the app on your local machine, install PHP 7, and run the app. You can walk through the errors and deprecation warnings shown in the terminal and manually fix them step-by-step by incorporating PHP 7 features. This can be very challenging and time consuming, so why can't we automate this process? Currently there is no tool out there that performs a 100% automatic conversion of your PHP 5.x codebase to PHP 7, but the tools below will help in making your migration painless. PHP 7 MAR: php7mar is a command-line tool that generates reports on a PHP 5.x codebase based on PHP 7 compatibility. The reports contain line numbers, issues noted, and suggested fixes along with documentation links. Note: the tool does not fix code; it only gives you reports about all the PHP files in your codebase. Happy fixing! PHP 7 Compatibility Checker: php7cc is a command-line tool designed to make migration from PHP 5.3 - 5.6 to PHP 7 really easy. php7cc reports errors (fatal, syntax, notice), which are highlighted in red, and warnings, which are highlighted in yellow. Phan: Phan is a static analyzer for PHP that attempts to prove incorrectness rather than correctness. Phan looks for common issues and verifies type compatibility on various operations when type information is available or can be deduced. Phan checks for lots of things, including PHP 7/PHP 5 backward compatibility. phpto7aid: phpto7aid is a tool that is used to identify PHP 5 code that will not work in PHP 7. It tries to aid you as much as possible in resolving these issues, by either providing the exact solution or giving hints on how to solve the issue. PhpStorm PHP 7
Compatibility Inspection: PhpStorm is a very smart PHP IDE developed by JetBrains. PhpStorm 10 comes with a PHP 7 compatibility inspection tool that can show you exactly what code is going to cause errors if you are running PHP 7. The image below shows a typical example of an application that has classes with names that are reserved in PHP 7. Selecting the Run Inspection by Name option from the Code menu, and then selecting the PHP 7 Compatibility section, will give you results like the one below. Building a PHP 5 app: we will build the first simple PHP 5 app very quickly. This is the scope of the app: a user will be able to register on the app; a user will be able to log into the app; a user will be assigned a random Star Wars code name; a user will be able to log out of the app. Building this app will require us to set up a database to store the users, write our registration and login code, and manage the users' sessions. Now, we won't employ the use of any framework because we don't want any form of overhead. Ordinarily, building this app would take a lot of time and setup, but there is a service we can use to eliminate the hassle. Oh yeah, Auth0 to the rescue! Create and configure an Auth0 client: the first thing we'll need to do is sign up for a free Auth0 account and configure a new client. Now head over to the Clients tab and create a new one, choosing 'Regular Web Application' as the client type. Let's name it something like 'Basic PHP WebApp'. Now that we have our client created, we need to take note of three properties: Domain, Client ID, and Client Secret. All of them can be found on the Settings tab of the client that we've just created. The last configuration that we need to do, before updating our code, is to add http://localhost:3000 as an allowed callback URL on our Auth0 client. Build the app: create a composer.json file in a new directory and add this to it, like so: { name: ..., description: 'Basic sample for securing a WebApp with Auth0', require: { 'vlucas/phpdotenv': '2.3.0', 'auth0/auth0-php': '~4' }, license: 'MIT' } (composer.json). All we need is the phpdotenv package
for reading environment variables and the auth0-php package that makes it easy to use the auth0 servicecreate a public folder inside the directory and add two filesappcss and appjs in itbody { font-familyproxima-novasans-seriftext-aligncenterfont-size300%font-weight100}input[type=checkbox]input[type=radio] { positionabsoluteopacity}input[type=checkbox] + labelinput[type=radio] + label { displayinline-blockbeforeinput[type=radio] + labelbefore { contentdisplayvertical-align-02emwidth1emheightborder15em solid #0074d9border-radiusmargin-right3embackground-colorwhite}input[type=radio] + labelbefore { border-radius50%}input[type=radio]checked + labelinput[type=checkbox]before { background-color#0074d9box-shadowinset 0 0 0 015em whitefocus + labelbefore { outlinebtn { font-size140%text-transformuppercaseletter-spacing1px#16214dcolorbtnhover { background-color#44c7f4focus { outlinenoneimportantbtn-lg { padding20px 30pxdisabled { background-color#333#666}h1h2h3 { font-weight}#logo img { width300pxmargin-bottom60pxhome-description { font-weightmargin100px 0}h2 { margin-top30px40px200%}label { font-size100%300btn-next { margin-topanswer { width70%autoleftpadding-left10%20pxlogin-pagelogin-box { padding5px 0}appcss$documentreadyfunction{ var lock = new auth0lockauth0_client_idauth0_domain{ auth{ redirecturlauth0_callback_urlresponsetypecodeparams{ scopeopenid} }}$btn-loginclicke{ epreventdefaultlockshowjsgo ahead and create ahtaccess file inside the directory like sorewriteengine onrewritecond %{request_filename}-frewritecond %{request_filename}-drewriteruleindexphp [l]create aenv filethis file will contain our auth0 credentialsauth0_domain=blahabababababaauth0comauth0_client_id=xxxxxxxxxauth0_client_secret=auth0_callback_url=http3000replace these values with the client_idclient_secret and domain from your auth0 dashboardadd the value of callback_url to the allowed callback urls in your settings on the dashboardauth0 dashboardallowed callback urlsalsodo not forget to add the 
same value to the allowed originscorsin your settings on the dashboardallowed origin corswe need a file to invoke the dotenv library and load the values that we have deposited in thecreate a new filedotenv-loaderphp like so<php // readenv try { $dotenv = new dotenvdotenv__dir__$dotenv->load} catchinvalidargumentexception $ex{ // ignore if no dotenv }dotenv-loaderphpfinallys create the indexphp file where all our app logic will residelike i mentioned earlierits just a basic app so dont be worried about separation of concernsthis is how the file should look likephp// require composer autoloaderrequire __dir__'/vendor/autoloadphp'require __dir__/dotenv-loaderuse auth0sdkapiauthentication$domain = getenvauth0_domain'$client_id = getenvauth0_client_id'$client_secret = getenvauth0_client_secret'$redirect_uri = getenvauth0_callback_url'$auth0 = new authentication$domain$client_id$auth0oauth = $auth0->get_oauth_client$client_secret$redirect_uri[ 'persist_id_token'=>truepersist_refresh_token']$starwarsnames = ['darth vader'ahsoka tano'kylo ren'obi-wan kenobi'r2-d2'snoke'$userinfo = $auth0oauth->getuserifisset$_request['logout'{ $auth0oauth->logoutsession_destroyheader"location/">html>head>script src="//codejquerycom/jquery-3minjs"type="text/javascript"/script>https//cdncom/js/lock/100/lockscript type="src="//usetypekitnet/iws6ohytry{typekit}catch{}<meta name="viewport"content="width=device-widthinitial-scale=1"link rel="icon"image/png"href="/favicon-32x32png"sizes="32x32"-- font awesome from bootstrapcdn -->link href="//maxcdnbootstrapcdncom/bootstrap/36/css/bootstrapcss"rel="stylesheet"com/font-awesome/450/css/font-awesomescript>var auth0_client_id = 'php echo getenvauth0_client_id"var auth0_domain = 'auth0_domain"var auth0_callback_url = 'auth0_callback_url"public/app/head>body class="home"div class="container"login-page clearfix"php if$userinfologin-box auth0-box before"img src="com/blog/app/star_warsapp/>p>heard you don't want to migrate to php 7dare us/p>a class="btn 
btn-primary btn-login"signin</a>/div>php elselogged-in-box auth0-box logged-in"h1 id="logo"star wars welcomes you to the family/h1>img class="avatar"width="200"php echo $userinfo['picture'h2>welcome <span class="nickname"nickname'/span>/h2>assigned codenameb>php echo $starwarsnames[rand6/b>btn btn-primary btn-lg"logout"logout<php endif/body>/html>relaxs analyze the code together// require composer autoloaderrequire __dir__phpthis is where we require the dotenv loader and composer autoloaderthe autoloader makes it possible for us to import any class from the php packages installed in the appauth0_client_secret[persist_id_tokenpersist_refresh_token$starwarsnames = [darth vaderahsoka tanokylo renobi-wan kenobir2-d2snokeauthentication is the auth0 authentication classit has the methods to retrieve a users profile when logged in$redirect_uri are variables that will house the values gotten from theenv file with the aid of the getenv methodthenwe moved on to instantiating the authentication classthe $auth0->method by default stores user information in the php sessionand we also instructed it to save the access_token and id_token$starwarsnames array contains some characters from star warslater in the codea user will be assigned a random code name from this array$auth0oauth->retrieves the user information$_request[/}this checks if the user submitted a request to log outclears the session and redirects the user back to the homepagewe are making use of auth0 lock widgetand we also using jquery to call the lock methods and handle button click eventpulled in bootstrap and font-awesome for beautificationherewe are feeding the auth0 credentials to javascript variablesin the code aboveif the $userinfo is not setthen it means the user has not logged in yetso we display the signin buttonif the user has signed inthen we grab the users info and display it along with the logout buttonrun the appgo to your terminal and run composer install to install the dependenciesnextrun your php 5x 
server. If your PHP server is accessible from the terminal, then you can run it via php -S localhost:3000. Open your browser and test the app; the index page should look like this: [index page]. Now, sign up and sign in. When you are logged in, you should be assigned a Star Wars codename, like so: [logged in]. Our app is now running successfully on PHP 5. You can grab the source code from GitHub to ensure that everything works as expected. Migrating our PHP 5 app to PHP 7: we are currently running a PHP 5.x app; let's migrate it to PHP 7. The good thing is that most times you might not have to change anything in the codebase. Let's see if that holds true for this app: upgrade your server to at least PHP 7.0 and run this app again. [PHP 7 server running] [app running on PHP 7 without any errors] Awesome! Now our first app is running on PHP 7 successfully. Work with the second app: the second PHP app we will go through is an API. It is a simple Chuck Norris API, and it has been built already with PHP 5 in mind. Clone it from GitHub and run composer install to install all the dependencies, then run the app on a PHP 5 server. Open up Postman and test the API: run http://localhost:3000/jokes/categories [API showing categories], then run http://localhost:3000/jokes/random [API showing random jokes]. The app is working fine, no errors. Use PHP 7 features in the second app: let's refactor this app and integrate some PHP 7 features. This is the directory structure of our API app at the moment: ----basic-api ----src ----main ----vendor ----.gitignore ----.htaccess ----composer.json ----index ----README.md. This is how our main.php file looks right now: <?php namespace App; use Exception; class Main { public function getCategories() { return $this->getCategoryData(); } private function getCategoryData() { return ['explicit', 'dev', 'movie', 'food', 'celebrity', 'science', 'political', 'sport', 'religion', 'animal', 'music', 'history', 'travel', 'career', 'money', 'fashion']; } public function getRandomJokes($randomNumber) { if (!is_integer($randomNumber)) { throw new Exception('The random number should be an integer. Please try again.'); } $jokes = ['Jon Skeet\'s code doesn\'t follow a coding convention. It is the coding convention.', 'Jon Skeet can divide by zero.', 'Jon Skeet points to null. Null quakes in fear.', 'Jon Skeet is the traveling salesman. Only he knows the shortest route.', 'When Jon pushes a value onto a stack, it stays pushed.', 'Drivers think twice before they dare interrupt Jon\'s code.', 'Jon Skeet does not sleep… he waits.', 'Jon Skeet can stop an infinite loop just by thinking about it.', 'Jon Skeet uses Visual Studio to burn CDs.', 'Jon Skeet has the key to open source. He just doesn\'t want to close it.']; return $jokes[$randomNumber]; } } Let's start by adding PHP 7 return type declarations to the methods in this class, like so: getCategories(): array { return $this->getCategoryData(); } ... getCategoryData(): array { return [...]; } ... getRandomJokes(...): string { if ... } (PHP 7 return type declarations added in main.php). Another PHP 7 feature we can add is function parameter typehinting. We have a method, getRandomJokes, that accepts a $randomNumber which is an integer. Let's refactor that method: we'll eliminate the if condition and just typehint the $randomNumber parameter, like so: public function getRandomJokes(int $randomNumber): string { $jokes = [...]; }. Now if you try to pass in a value other than an integer, like so: $router->get('/jokes/random', ... use ($app) { echo json_encode($app->getRandomJokes('dsdsds')); });, PHP 7 will throw a TypeError [PHP 7 TypeError]. We have been able to add some PHP 7 features; the app also runs on a PHP 7 server and everything just works fine. The source code of the PHP 7 version of the API can be found on the php7 branch on GitHub. Performance: PHP 7 runs on the new Zend Engine 3, thus making your apps see up to 2x faster performance and 50% better memory consumption than PHP 5. It also allows you to serve more concurrent users without adding any hardware. Rasmus Lerdorf, creator of PHP and inventor of the SQL LIMIT clause, did some benchmarking with a few popular PHP projects on the various versions of PHP, from PHP 5.4 up until PHP 7.0, and also benchmarked against HHVM 3.1. Let's take a good look at the benchmarks. The test box specs Rasmus used are: Gigabyte Z87X-UD3H, i7-4771 4 cores @ 3.50GHz w/ 16G of RAM @ 1600MHz, hyperthreading enabled for a total of 8 virtual cores, Toshiba THNSNHH256GBST SSD, Linux debian 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt9-2 (2015-04-13) x86_64 GNU/Linux, MySQL 5.x, nginx 1.2 + php-fpm for all tests unless indicated otherwise, quiet local 100Mbps network, Siege benchmark tool run from a separate machine. Projects tested: Zen Cart 1.4, Moodle 2.9-dev (cached), Traq 3.2, Geeklog 2.0, Wardrobe CMS 1.0, OpenCart 2.0, MediaWiki 1.24.1, phpBB 3.3, WordPress 4.1, Drupal 8. From the results above, you can see that we can make double the number of requests in less time on PHP 7 than on PHP 5. These specs can be found in the Speeding Up the Web with PHP 7 talk he gave at Fluent Conf 2015. Check out the following benchmarks: php7-benchmarks, PHP 7 final version vs HHVM benchmark, HHVM vs PHP 7 performance showdown - WordPress, nginx. Conclusion: we have successfully covered how to upgrade your development and server environments from PHP 5 to PHP 7, gone through the features PHP 7 offers, and also migrated two apps from PHP 5 to PHP 7. Woot! It's been quite a journey highlighting everything PHP 7 has to offer. PHP has grown tremendously over the years, from a toy language to a full-blown, fast, enterprise language. The PHP manual and RFC documents remain the most complete go-to reference for any of the new PHP 7 features; you can always leverage them for more information", "image" : "https://cdn.auth0.com/blog/migration/PHPlogo.png", "date" : "February 09, 2017" } , { "title" : "Is Multifactor Authentication The Best Way To Secure Your Accounts?
Myths And Reality", "description" : "Multifactor authentication is important, but the question of implementation is more complex than it seems.", "author_name" : "Diego Poza", "author_avatar" : "https://avatars3.githubusercontent.com/u/604869?v=3&s=200", "author_url" : "https://twitter.com/diegopoza", "tags" : "mfa", "url" : "/is-multifactor-authentication-the-best-way-to-secure-your-accounts-myths-and-reality/", "keyword" : "introin recent yearsmultifactor authentication has become quite the buzzword in information securityproducts from twitter to instagram have implemented their own two-step login processesresponding to widespread user demand for better security and the ever-present reality of hackers cracking accounts and selling them across the internetall this popularity has also led to the creation and perpetuation of various myths about multifactor authentication — what it iswhat its for — that can mislead developers and users alikegoing with multifactor authentication is almost always going to be an improvement over notbut its table stakes nowsophisticated attackers arent deterred by poorly configured multifactor authentication systemsto keep internal and user information secureyou need to know what youre doingmyth #1there are only a few different forms of mfaan authentication factor is a vector through which identity can be confirmed or deniedfrom fingerprint scanners to passwordsfrom usb sticks to pin codesthe commonly used term “multifactor authentication” simply refers to an authentication scheme that uses more than one of these methodsit was once true that most mfa systems operated basically the same waytodayhowevertheres a great variety of factors you can request from your usersfrom push notification receipt to sms to email to fingerprintrealitymfa is extremely customizablethere are three entire genres of factor—knowledgepossessionand inherenceknowledgesomething only a particular user knowssuch as a password or the answer to a secret questionsomething 
only a particular user hassuch as a usb stick or identifying badgeinherencesomething only a particular user issuch as determined through a fingerprint scanner or gps recognition systemevery multifactor authentication system out there is built upon some combination of these three basic factorsa simple password/secret question system would be made up of two separate knowledge factorswhile one that asked you for an rsa hardware token in addition to your password would be made up of both knowledge and possession factorsa passwordless login system relies upon you being in possession ofand able to accessyour email inboxand so onthere are so many different forms of authentication out there now that you can freely choose whether you want something that maximizes ease of usesomething that maximizes security through obscurityor something in betweenmyth #2your use case for mfa doesnt matterrsa and totp are two of the most popular methods for generating secure codes that we have todaybut they are not interchangeable — both have their pros and consthe main technical difference between them is that rsa operates asymmetricallywith a public and a private keyand totp operates symmetricallywith a single private key shared between both partiesbut this isnt just a minor detail about how the two systems workthis single difference affects the appropriate context for each one and the trade-offs you will be makingyou have to choose your method based on your needsthe security of the public/private key pair of rsa “relies on the computational difficulty of factoring large integers”two extremely large prime numbers are generated using the rabin-miller primality test algorithmthe public key is generated from the modulus of those two prime numbers and a public exponentthe private key is generated from the modulus of those two prime numbers and an exponent calculated with the extended euclidean algorithmthis also means that computational power is required to encrypt and decrypt rsa when its used
properlythat makes rsa slowerbut the benefit is that only one side of the transaction needs to actually possess the private keytotpon the other handoperates symmetricallya secret key is known to both the signer and the signee at the same timea hash function is used to blend the secret key with the time at the moment of authenticationrequiring fairly precise clock synchronizationand a one-time password is generated that is valid only for a short amount of timethis takes significantly less time and processing power than rsabut it does mean certain vulnerabilities become hypothetically possibleif a key were somehow compromised on the server-sidefor instancean employee of an organization could potentially impersonate a user to malicious endswith rsathe same employee would have to change the codebase to do such a thing—likely leaving a paper trailif you absolutely need your users to be able to download and store their private keys on their own systems—maybe youre working with financial data like credit cardshealth recordsor other forms of sensitive information—you may want to go with an asymmetric method like rsamyth #3all mfa solutions work basically the same waymass adoption of multifactor authentication is still a significant work in progressand its fair to say that most sites should simply focus on implementing it in some form or another—not which method is the absolute perfect onethat does not mean that every single form of multifactor authentication is equally secureeach occupies a different position on the axes of security and ease of usewhich means that each one will be optimal under a different set of circumstancesyou have to choose what you value mostsms is one of the older and more common forms of multifactor authentication that you see out thereyou log in to a websiteand then to double-check your identitya code is sent to a phone number that you have on fileyou receive a text on your mobile phone that contains a codeand then you enter that code into the
website to verify yourselfsimplebut the us national institute of standards and technology has recently come out with a report saying they believe that sms multifactor is vulnerable to hijackingparticularly when used by subscribers to a voip phone service like google voicesms is also vulnerable to social engineering—in some instancesattackers have been able to simply call up a victims phone company andimpersonating their targetask that all text messages to that account be forwarded to a different onethese vulnerabilitiesplus the fact that other forms of authentication have become more user-friendlymean that many sites and apps enabled with multifactor authentication are moving on to different methodsone alternative is the time-based one-time password algorithmor totpwhich is most notably used by apps like google authenticatora single-use password is generated from the combination of a secret key and the current timeand you enter that into the app asking for authentication rather than a code that could have been intercepted in transmissiontotpbecause it often involves users manually copying a six-digit code from their phone to their computeris often considered to be a burden on userswith auth0 rulesthoughyou can get the benefit of mfa without that annoyance by setting up special conditions under which authentication will be requestedsay someone tries to call your banks customer support to reset your password — as they did to brian krebs — that event could be flagged as requiring a temporary extra authenticationyou could trigger the same kind of request in the event of a new email account being addedan address being changedand the likethis will keep your userspersonal information and account more secure without disrupting their usage of your productas they likely wont be performing these kinds of actions very oftenmyth #4mfa is always annoying for usersat many companies where multifactor authentication is tried but failsone of the most common complaints is that it makes logging in too
much of a hassle for usersthey have to first enter their username and passwordand then open up their email client or take out their phoneand then manually copy a codeor they lose the hardware key that they were given during onboardingor they misplace their mobile device and are no longer able to log in to anythingthese companies get so many annoyed emails from their employeesor start noticing users forgetting their passwords and consequently churningthat they finally turn mfa offmfa can be as easy as tapping a push notificationmfa does not have to be troublesome for usersit doesnt have to require keeping track of a tokenmanually writing a codeor copying and pasting a code from a mobile device to your computerwith auth0 guardianyou can make logging in through multifactor authentication a simple matter of swiping and tapping a push notification from your phones lock screenits available for both ios and androidand can be enabled with a simple togglecheck out the full docs heremyths bustedtheres no doubt that implementing multifactor authentication is one of the best ways to improve the security of a website or an app that doesnt have itbut as with any decision regarding the privacy and security of your users and their informationnothing is as simple as it appears at first glanceat auth0we want to make sure that mfa is something you can implement knowing that it will protect your accounts without harming the user experiencewith guardianwere pushing that project forwardand were really excited to have you try it outcheck it out", "image" : "https://cdn.auth0.com/blog/mfa-myths/logo.png", "date" : "February 08, 2017" } , { "title" : "Migrating a PHP 5 App to PHP 7 (Rundown of PHP 7 Features) - Part 2", "description" : "Take a look at the PHP 7 features and learn how they can help you in migrating your PHP 5 projects.", "author_name" : "Prosper Otemuyiwa", "author_avatar" : "https://en.gravatar.com/avatar/1097492785caf9ffeebffeb624202d8f?s=200", "author_url" :
"https://twitter.com/unicodeveloper", "tags" : "php5", "url" : "/migrating-a-php5-app-to-php7-part-two/", "keyword" : "tldrmany php applications are still running on php 5xnot ready to take full advantage of the awesome features that php 7 offersa lot of developers have not made the switch because of certain fears of compatibility issuesmigration challenges and the strange awkward feeling that migrating will take away a big chunk of their timein the first part of this tutorial we learned how to set up a php 7 development environmentthis timewell learn about all the new php 7 features and how you can leverage them when migrating your php 5 app to php 7php 7 featuresscalar type declarationwith php 5you could typehint a function parameter with classesinterfacescallable and array types onlyfor exampleif you want a parameter of a certain type string to be passed into a functionyou would have to do a check within the function like so// php 5function getbookno$number{ ifis_integer{ throw new exceptionplease ensure the value is a number} return $number}getbooknoboooksphp 7 eliminates the need for the extra checkwith php 7you can now typehint your function parameters with stringintfloatand bool// php 7function getbooknoint $number{ return $number// error raisedphp fatal erroruncaught typeerrorargument 1 passed to getbooknomust be of the type integerstring givencalled inphp 7 will throw a fatal error as seen above once you typehint with scalar valuesstrong type checkby defaultphp 5 and 7 allow for coercion when dealing with operations such as numeric stringsan example is thisfunction getbookno{ returnthis is it}echo getbookno8// resultthis is it8i passed in a string and it coerced it to an integer and allowed it to run successfullynow in php 7you can be strict and ensure no form of automatic conversion occurs by declaring a strict mode at the top of your php file like sodeclarestrict_types=1// resultphp fatal errorin php 5if you pass in a float valueit automatically strips 
out the decimal parts and leaves you with an integerin php 7if you pass in a float valueit will throw a fatal errorwhen building a financial applicationthis feature comes in handyremember something like this in javascriptwhere you have to write usestrictat the top of your javascript filereturn type declarationphp 7 supports return types for functionsthis feature has been available in several strongly typed languages for a long timenowyou can easily enforce a function to return a certain type of data like sofunction dividevaluesint $firstnumberint $secondnumberint { $value = $firstnumber / $secondnumberreturn $value}echo dividevalues9// result0in the function abovewe want the return value to be an integerregardless of whatever the division turns out to benow the default weakcoercivetype checking in php comes into play again herethe value returned should be a float and it should throw a fatal type error but it is automatically coerced into an integerenable strict mode by placing declareat the top of the file and run it againit should throw a php fatal type error like sophp fatal errorreturn value of dividevaluesfloat returned inspaceship operatorphp 7 ships with a new operator<=>for simplifying the evaluation of comparison operationswith this operatorit is easier to evaluate less thanequal toor greater thanthe results will be either -1 or 0 or 1ruby and perl programmers are familiar with this operatorthis is how it worksif we have two operands $x and $yand we do $x <=>$ythenif $x is less than $ythe result will be -1if $x equals $ythe result will be 0if $x is greater than $ythe result will be 1function evaluate$x{ return $x <y}evaluate// result1good real world cases for this operator are the simplification of comparison methods and its use in switch operations like so$data = [ [nameadocars2][tony4]ramirond3]woloski12]]function sortbycars{ return $x[] <$y[]}usort$datasortbycarsprint_r// resultarray[0] =>array[name] =>ado [cars] =>2[1] =>ramirond [cars] =>3[2] =>tony [cars]
=>4[3] =>woloski [cars] =>12it sorted the array easily with less codewithout the spaceship operatori would have to write the sortbycars method like so$x[] == $y[{ return 0} return-11}array constantsbefore nowconstants defined with the definemethod can only accept scalar valuesin php 7you can have constant arrays using the definemethod like so// php 7definefinemercedesstrongvolkswagenuglychevroletecho cars[// resultmercedesgroup use declarationsgroup use declaration helps make the code shorter and simplerbefore nowif you are trying to use multiple classesfunctions and constants from the same namespaceyou have to write it like so// php 5namespace unicodeveloperemojiuse unicodeveloperexceptionsunknownmethodunknownemojiunknownunicodeunknownisnulluse function unicodevelopercheckforinvalidemojiuse const unicodeveloperinvalid_emojiclass emoji {}with php 7you can group them like so// php 7namespace unicodeveloper{ unknownmethodisnullfunction checkforinvalidemojiconst invalid_emoji }class emoji {}anonymous classesan anonymous class is essentially a local class without a nameanonymous classes offer the ability to spin up throwaway objectsthese objects have closure-like capabilitiesan anonymous class is defined like sonew class$constructor$args{}a real world case is a situation where you want to have objects that implement some interfaces on the flyrather than having several fileswhere you have to define the class and then instantiate ityou can leverage anonymous classes like so$meme = new class implements memeinterface { public function memeform$form{ return $form}}$app = new app$memeenhanced unicode supportin php 7all you need is the hexadecimal code appended touand youll have your symbol/emoji as an outputfunction getmoney{ echou{1f4b0}}getmoney// result💰the enhancements were made possible from the unicode codepoint escape syntax rfcyou can also get the name equivalent of the unicode charactersayvia the new intlchar class like soecho intlcharcharnameyou can get the 
character from the name like sovar_dumpintlcharcharfromnamelatin capital letter asnowmanturtlenotethe intlchar class contains about 600 constants and 59 static methodsthis was made possible by the intlchar rfcthe php manual has extensive documentation on intlchar classnull coalescing operatorthe purpose of this new operatoris to allow developers to set values from user inputs without having to check if the value has been setbefore php 7this is how you evaluate inputcheck this out$occupation = isset$_get[occupationbricklayerif the value of $_get[] existsit returns the value else it assigns bricklayer to the $occupation variableyou can simply shorten that line of code using theoperator like so// php 7$occupation = $_get[it automatically checks whether the value is set and assigns the value to the $occupation variable if it iselse it returns bricklayerthe null coalescing operator also allows you to chain expressions like so// php 7$_env[] =software engineer$_env[// resultsoftware engineerthis will assign the first defined value to the $occupation variableclosure on callthere is now a better and more performant way of binding an object scope to a closure and calling ityou would bind an object to a closure like soclass nameregister { private $name =prosper}// closure$getname = function{ return $this->}$getthename = $getname->bindtonew nameregisternameregisterecho $getthenameyou now have a call method on the closure classso you can bind an object to a closure easily like so}$getname = function{ echo $this->$getname->callcheck out the php manualclosurecall for more informationexpectations and assertionsassertions are a debugging and development featurethe assertfunction in php 7 is now a language constructwhere the first parameter can also be an expression instead of just being a string or booleanthey have been optimized to have zero cost in productionyou can now enable or disable assertions from the php_ini file like sozendassertions = 1 // enable assertionzendassertions =
0 // disable assertion zendassertions = -1 //production modedont generate or execute codeassertions can now throw an exception when they failyou can enable that from the ini file like soassertexceptions = 1 // throw exceptions// orassertexceptions = 0 // issue warningswhich has always been the caseassertcan now take in two arguments where the second argument is an error messageit can also be an instance of an exceptionan example is shown belowclass projectexception extends assertionerror {}public function checkauthenticityofproject{ /**/ assert$project instanceofunicodeveloperprojectnew projectexception$project was not a project object}notewith this new featureyou might not need to depend on assertion libraries anymore while developing and testing your codecheck out the expectations rfc for more informationerror handlingmany fatal and recoverable fatal errors have been converted to exceptions in php 7most errors are now reported by throwing error exceptionsthe exception class now implements a throwable interfacehierarchythrowable├──exceptionimplementsthrowable│ ├──logicexception│ │badfunctioncallexception│└──badmethodcallexception│ │──domainexception├──invalidargumentexceptionlengthexceptionoutofrangeexception│ │ │runtimeexception│ ├──outofboundsexception│ ├──overflowexception│ ├──rangeexception│ ├──underflowexception│ └──unexpectedvalueexception└──errorassertionerror ├──arithmeticerror ├──divisionbyzeroerror ├──parseerror └──typeerrorso you can catch specific errors like sotry { // evaluate something} catchparseerror $e{ // do something}earlier in this articlewe were evaluating scalar type hinting and php 7 threw typeerrorsrememberyesthats how cool php 7 is nowyou can catch multiple errors and exceptions in one catch block like sotry { // some code} catchexceptiontypeaexceptiontypebexceptiontypec $e{ // code to handle the exception} catchexception $e{ //}this is particularly useful when one method throws different types of exceptions that you can handle the same waya new
error_clear_lastmethod has been added to clear the most recent erroronce usedcalling error_get_lastwill be unable to retrieve the most recent errorcheck out the catching multiple exception types rfcinteger divisionphp 7 introduced a new function intdivwhich returns the result of an integer division operation as int// php 7$result = intdiv10// result2regular expressionshandling regular expressions just got easier in php 7a new preg_replace_callback_arrayfunction has been added to perform a regular expression search and replace using callbacks$message =haaaalaaaaaagirls and people of instagrantpreg_replace_callback_array~[a]+~ifunction$match{ echo strlen$match[0]matches forahave been found~[b]+~ibfound~[p]+~ip} ]$message// result4 matches forhave been found6 matches forhave been found1 matches forfound1 matches forfoundfiltered unserializethe unserializefunction has existed since php 4it allows you to take a single serialized variable and convert it back into a php valuethe options parameter has been addedyou can now whitelist classes that can be unserialized like so// converts all objects into __php_incomplete_class objectunserialize$objallowed_classesfalse]// converts all objects into __php_incomplete_class object except those of firstclass and secondclassunserializefirstclasssecondclass]]// default behavioursame as omitting the second argumentthat accepts all classesunserializetrue]it was introduced to enhance security when unserializing objects on untrusted datathe allowed_classes element of the options parameter is now strictly typedunserializereturns false if anything other than an array or boolean is givencryptographically secure pseudorandom number generatorcsprngrandom_bytesand random_inthave been added to the csprng functions in php 7random_bytesreturns a random string of a given lengthrandom_intreturns a random integer from a rangerandom_bytesrandom_int05000generator delegation and return expressionsgenerators were introduced in php 55prior to php 7if you tried to
return anythingan error would be thrownyou can use a return statement within a generatoryou can get the returned value by calling the generatorgetreturnmethodlook at the code below$square = functionarray $number{ foreach$number as $num{ yield $num * $num} returndone calculating the squarewhat next$result = $square[15]foreach$result as $value{ echo $valuephp_eol}echo $result->// grab the return value// result1491625done calculating the squaregenerators can now delegate to another generator by using yield from like sofunction square} yield from additionfunction addition{ yield $num + $num} }foreachsquareas $value}// result1491625246810session_start config enhancementsthe session_startmethod now accepts an array of values that can override the session config in phpini filesessionlazy_write which is on by default can be turned off by explicitly stating it in the session_startsession_startlazy_writefalsecache_limiterprivateunpack objects withthelanguage construct now allows you to unpack objects implementing the arrayaccess interface$fruits = new arrayobjectbananamangoapple$a$b$c= $fruitsecho $aecho $becho $c// resultbananamangoapplenoteexpressions can no longer be completely emptyassigns the values starting with the right-most parameterstarts with the left-most parameterthis is true when working with arrays with indicesaccessing static valuesin php 5if you try to access a static value like soclass auth0 { static $lock =v10}echoauth0$lock// resultparse errorsyntax errorunexpectedt_paamayim_nekudotayimexpectingorinit throws no errorit simply works// php 7class auth0 { static $lock =foo// resultv10dirnameenhancementthe dirnamein php 5 returns a parent directorys pathan optional levels parameter has been added to the function to allow you as developer determine how many levels up you want to go when getting a path$path =/unicodeveloper/source/php-workspace/laravel/vavoomdirname$path// result/unicodeveloper/sourceuniform variable syntaxthis brings a much needed change to 
the way variable-variable expressions are constructedit allows for a number of new combinations of operators that were previously disallowedand so introduces new ways to achieve old operations in a more polished code// nesting$foo$bar$baz // access the property $baz of the $foo$bar property// nesting// invoke the return of foo// operators on expressions enclosed in{}// iife syntax from js // old meaning // new meaning$$foo[bar][baz] ${$foo[]}$$foo]$foo->$bar[] $foo->{$bar[$foo->reserved wordsphp 7 now allows globally reserved words such as newfor as propertyconstantand method names within classesand traitsclass car { private $type$who$costspublic function new$cartype{ $this->type = $cartypereturn $this} public function forwho = $who} public function costs$priceprice = $price} public function __tostring{ return $this->type$this->whoprice}}$car = new carecho $car->newmercedes benz->forwifecosts14000// resultmercedes benz wife 14000reflection api enhancementsphp 7 introduces two new reflection classesone is the reflectiongenerator class that reports information about generators and the other is the reflectiontype class that reports information about a functions return typereflectiontype apireflectiontypeallowsnull — checks if null is allowedreflectiontypeisbuiltin — checks if it is a built-in typereflectiontype__tostring - gets the parameter type namereflectiongenerator apireflectiongenerator__construct — constructs a reflectiongenerator objectreflectiongeneratorgetexecutingfile — gets the file name of the currently executing generatorreflectiongeneratorgetexecutinggenerator — gets the executing generator objectreflectiongeneratorgetexecutingline — gets the currently executing line of the generatorreflectiongeneratorgetfunction — gets the function name of the generatorreflectiongeneratorgetthis — gets the $this value of the generatorreflectiongeneratorgettrace — gets the trace of the executing generatortwo new methods have also been added to the reflectionparameter 
and reflectionfunctionabstract classesreflectionparameter apireflectionparameterhastype - checks if parameter has a typereflectionparametergettype - gets a parameters typereflectionfunctionabstract apireflectionfunctionabstracthasreturntype - checks if the function has a specified return typereflectionfunctionabstractgetreturntype — gets the specified return type of a functiondeprecated featuresusing deprecated features in php will trigger an e_deprecated errorphp 4 style constructors are deprecatedand will be removed in the futurean example of a php 4 style of writing constructorshaving the same name as the classis thisclass economy { function economy{ /**/ } }static calls to methods that are actually not static are deprecatedclass economy { function affordprimaryeducation{ echoi think i might not be able to afford it with this economy} } economyaffordprimaryeducation// result deprecatednon-static method economyshould not be called statically inthe salt option for the password_hashfunction has been deprecated to prevent developers from generating their own salts which are mostly insecurethe capture_session_meta ssl context option has been deprecatedstream_get_meta_datacan now be used to get ssl metadatathe ldap_sortfunction has been deprecatedthe alternative php tags shown below have been removedphp script tags <script language=php>/script>php asp tags <% %>backward incompatible changeshere are backward incompatible changes you should be aware ofset_exception_handleris no longer guaranteed to receive exception objectsinternal constructors always throw exceptions on failuresome internal classes would return null when the constructor failednow they will throw an exceptionerror handling for evalshould now include a catch block that can handle the parseerror objectthe almighty e_strict notices now have new behaviorsits no longer too strictcan no longer unpack string variablesstr_splitshould be used when performing this form of operationglobal can no longer accept
variable variables unless you fake it by using the curly brace like so global ${$foo->bar}an e_warning will be emitted and null will be returned when internal functions try to perform float to integer automatic conversionsprefixing comments with # in phpini file is no longer allowedonly semi-colonsshould be useddividing by 0 will emit an e_warning and also one of either +inf-infor nan$http_raw_post_data was deprecated in php 560 and finally removed in php 7use php//input as a replacementswitch statements can no longer have multiple default blocksan e_compile_error will be triggered if you try to define more than one default blockfunctions cannot have multiple parameters with the same namefunction slap$hand$strengthan e_compile_error will be triggered as a result of this functionstatic calls made to a non-static method with an incompatible context will now result in the called method having an undefined $this variable and a deprecation warning being issuedyou can check out the few other php core functions that have changedremoved extensions and sapisthe ext/mysqlext/mssqlereg and sybase_ct extensions have been removedall the mysql_ functions have been removedyou should either use the ext/mysqli extension or use the ext/pdo extension which has an object-oriented apithe aolserverapacheapache_hooksapache2filtercaudiumcontinuityisapimilternsapiphttpdpi3webroxenthttpdtux and webjames sapis have been removedconclusionwe have successfully covered all the new features of php 7it might be overwhelming at first because it is a major version with a lot of new featuresand lots of deprecationsgoing over the rundown of all these features as highlighted in this article and using it as a handy reference will give you all the necessary information to migrate your php 5 apps to php 7thanks to the php manual and rfc documentsyou can always reference them for more informationin the next and final part of this serieswell convert a small php 5 app to php 7then measure and report the
performance difference", "image" : "https://cdn.auth0.com/blog/migration/PHPlogo.png", "date" : "February 07, 2017" } , { "title" : "Customizing create-react-app: How to Make Your Own Template", "description" : "Create React App (CRA) is a very good tool for creating React apps from the CLI without build configuration.", "author_name" : "Prosper Otemuyiwa", "author_avatar" : "https://en.gravatar.com/avatar/1097492785caf9ffeebffeb624202d8f?s=200", "author_url" : "https://twitter.com/unicodeveloper", "tags" : "create-react-app", "url" : "/how-to-configure-create-react-app/", "keyword" : "tldrthere are several tools available for developers to aid the building of various types of websites and applicationsone such tool is create react appcrathe cli tool that helps javascript developers create react apps with no build configurationas awesome as cra isdevelopers still need a way of tweakingadding special scripts and modules that dont come bundled with cratodayill teach you how to createcreate-react-app scripts for you and your teammany developers already use create-react-app to build their react applicationsbut like i mentioned earlierdevelopers are still screaming for more configuration optionssome are interested in having support forpostcsscss moduleslesssasses7mobxserver renderingand a lot more out of the boxa lot of developersincluding javascript newbies create react apps from scratch dailyso the cra team at facebook built the create-react-app tool to make the process of creating such apps less tedious and error-proneas a developer that needs support for some of the technologies i highlighted earlierone way of going about it is running npm run ejectthis command copies all the config files and dependencies right into your projectthen you can manually configure your app with all sorts of tools to satisfactionone major challenge developers might face with eject is not being able to enjoy the future features of craanother challenge with eject would be inefficient
synchronised setup across react developers working in a teamone great way of solving this latter challenge is publishing a fork of react-scripts for your teamthen all your developers can just run create-react-app my-app --scripts-version mycompany-react-scripts and have the same setup across boardlets learn how to accomplish thatcreate a forkopen up your github repo and fork the create-react-app repocreating a fork of create-react-appinside the packages directorythere is a folder called react-scriptsthe react-scripts folder contains scripts for buildingtesting and starting your appin factthis is where we can tweakconfigure and add new scripts and templatestweak the configurationclone the directory and open up the react-scripts/scripts/initjs in your code editorlets add a few console messages like soconsolelogchalkredvery importantcreate aenv file at the root of your project with react_app_employee_id and react_app_position_idyou can find these values in the company dashboard under application settingshttps//companybamboohrcom/settingsadd the important message during installation hereadded important message to show during installationnowlets change templatesopen up react-scripts/template/src/appjs and replace it with thisimport react{ component } fromreactimport logo from/logosvgimport/appcssclass app extends component { getenvvalues{ ifprocessenvreact_app_employee_idreact_app_position_id{ throw new errorplease define `react_app_employee_id` and `react_app_position_id` in yourenv file} const employeeid = processreact_app_employee_id const position = processreturn { employeeidposition }} render{ const { employeeidposition } = thisgetenvvaluesreturn<div classname=app>app-headerimg src={logo} classname=app-logoalt=logo/>h2>welcome to unicode labs</h2>/div>p classname=app-introb>employee id{ employeeid } </b>br/>position{ position } </p>}}export default appnowgo to react-scripts/template/public directoryopen the indexhtml file and change the value of the <title>tag to unicode
labsyou can also change the favicon to your companys faviconyou can change as many things as you want and addcomponents that your team uses frequentlycreate anexample in the react-scripts/template directory that contains the followingreact_app_employee_id=44566react_app_position_id=engra user will have to rename it toenv once the create-react-app tool is done installing the react-scriptsyou should add this instruction to the readme filenotecra already includes support forenv variables if youre open to prefixing their names with react_appthats all we needpublish react-scripts to npmbefore publishing to npmwe need to change the value of the name key of the packagejson file in react-scripts directory to unicodelabs-react-scriptschange the value of the description key to unicodelabs configuration and scripts for create react appalsopoint the value of the repository key to the right locationin my caseit is unicodelabs/create-react-appcd to the react-scripts directory from your terminal like sochange into this directory on your terminalyou need to login to npm like solog into npmgo ahead and publishpublished unicodelabs-react-scripts to npmtest yourscripthead over to your terminal and runcreate-react-app test-app --scripts-version unicodelabs-react-scriptsin your own case it would be yourname-react-scriptswhere yourname is your company name or whatever name you choose to give itcra would install it and then you will see a notice like soimportant warningrememberwhen we put this message in the code earlierawesomecd into the test-app directoryrename theexample toenv and run npm start commandyour app will spin up with the new template like soif you have yarn installedthen create-react-app would install your app using yarnasideusing create-react-app with auth0authentication is a very key part of various applicationsauth0 helps you toadd authentication through more traditional username/password databasesadd support for linking different user accounts with the same usersupport 
for generating signed json web tokens to call your apis and flow the user identity securelyanalytics of howwhen and where users are logging inpull data from other sources and add it to the user profilethrough javascript rulesachieve ssosingle sign onseamlesslyauth0 has its own fork of react-scripts which means you can install an auth0-powered react app with a single command like socreate-react-app my-app --scripts-version auth0-react-scriptsonce it is done installinggo ahead andgrab your client id and auth0 domain from the auth0 dashboardenv file in the root of the my-app project and add client id and auth0 domain values to react_app_auth0_client_id and react_app_auth0_domain respectivelyrun the appwelcome screenlogin screenlogged involayou now have a fresh react app with full authentication powered by auth0 ready for usesign up for a free account today and enjoy fastseamlessand hassle-free authentication in your appsconclusiongreat programmers constantly sharpen their tools daily to increase productivitycra is a great tool for quickly building react applicationsin additionhaving your own customized fork of react-scripts helps you and your team easily add all the configurations you needyoull need to maintain your forkand make sure it is synced with the upstream to have all updatesbackstroke is a bot that can help you with thishave a very productive time hacking away", "image" : "https://cdn.auth0.com/blog/optimizing-react/logo.png", "date" : "February 06, 2017" } , { "title" : "How Enterprise Federation Helps Shorten The Sales Cycle", "description" : "Optimizing your login for enterprise customers can help you save time and close deals.", "author_name" : "Martin Gontovnikas", "author_avatar" : "https://www.gravatar.com/avatar/df6c864847fba9687d962cb80b482764?s=60", "author_url" : "http://twitter.com/mgonto", "tags" : "federation", "url" : "/how-enterprise-federation-helps-shorten-the-sales-cycle/", "keyword" : "enterprise customers are greattheyre largestable 
accounts that can represent a real win for your businessbut theyre also a unique challengeand more complicated than small to medium-sized businessessmbsselling to enterprise clients has always meant managinglong sales cyclescustom solutionsspecific needsbut in 2017in the aftermath of some of the most prominent hacks of all timeit also means keeping up with enterprise companies that are getting extra serious about making sure that their security practices are airtightimplementing auth0with support for 10+ types of enterprise iam federationis the best way to shorten your sales cycle with these companies and make sure you dont have to do frantic work on the back-end to make deals happenhow selling to enterprise companies usually worksyouve built a product that youve been selling to smbsbutfinallyyoure starting to get a foothold with enterprise-level customersyour hard work is paying offand you line up a deal with coca-colathings are really moving for your businessand it feels amazingas it turns outcoca-cola uses active directory for their iamso if you want coca-cola employees using your productyou need to implement active directoryof courseyou get right on it to close the dealbut it takes three monthswhich stretches your sales cycle out much longer than it could beafter your deal with coca-cola closesre feeling good about your ability to serve the enterprise customerso you go to disney and you try to sell to disneybad news comes almost immediatelydisney uses samlnow you have to go through the whole process all over againextending your sales cycle because of compatibility issues is just time and money being constantly thrown awayon top of thatit doesnt present your business in the best lightyour internal champion at your enterprise customer could leave or be fired in those three monthsopportunities could be lost elsewhere because you had to spend time building this integrationyou have to spend even more money on developers making those integrations happenthis is not a 
cycle you want to be stuck inand many of your customers are going to have these kinds of requirements if you want to sell to thembased on data weve collected about enterprise identity managementwe learned that hundreds of thousands of companies rely on active directory alonemost53%enterprise connections are through active directory federation servicesadfsanother 35% are using active directory connectionssamlp7%microsoft azure active directory3%and google apps2%round out the top 5ve got to be able to handle these requirements to get your product in the door with enterprise customerssafari books case studya great example of how auth0 can be easily integrated into your app to open up enterprise possibilities is through safari books onlinesafari books online was facing a predictable pattern of long sales cycles and new development time when they first started to go upmarket and sell their design resourcesvideosand e-books services to the enterprise customerat firstthey tried to implement all the enterprise demands they were receiving in-housebut realized they couldnt upgrade their login quickly enoughunderstanding what they stood to losethey sought out an iamsafari books online chose auth0s secure enterprise login options to upgrade their platform and implement logins across enterprise requirementsthis dramatically shortened their sales cycle and gave them the functionality they desperately needed“compared to the costs and resources required to buildhostand secure asolutionthe investment associated with a third-party authentication service like auth0 was a sensible choice” said safari engineering manager cris concepciontodaysafari books online is used by a wide range of enterprise clientsfrom google to teslahow enterprise sales work with auth0when you use auth0you immediately cut down on the time that your team has to spend doing heavy lifting on the compatibility frontauth0 allows you to make your app work with 10+ different enterprise identity providersout of the 
boxauth0 acts as an intermediary between your app and your usersit seamlessly integrates into your productbut keeps your app isolated from any changes to or idiosyncrasies of different implementationsthis means you can keep maximum function for your app with minimal effort for loginusing auth0 to ramp up your security and login doesnt just shorten your sales cycle for one customerbut also gives you the tools to fit your login to a wide variety of enterprise needsif you configure compatibility between auth0 and your app onceve configured compatibility for 10+ identity providers in one fell swoopauth0 also gives you the power to add other critical features that enterprise customers wantlike multi-factor authentication and single sign-onthe power of the enterprise customerenterprise contracts can represent a big win for your business and are one of the best ways to maximize your revenuethe enterprise customer wants great solutions to their problemsand its likely that a well-crafted product that works for smbs will scale extremely well to an enterprise ventureadd a few extra perkslike a designated point-person and a higher user limitand youve paved the path to scale upthe key is to couple this with enterprise-ready security and identity managementif enterprise customers want employees to be able to sign in with their existing logins instead of creating more accountsyou can use auth0s single sign-on to streamline your products integrationif they want to hone in on securityyou can offer multifactor authentication that requires both a login and an additional security precautionlike a fingerprint scanor a verification codewhatever their needa good iam integration will help you cater to itthis will allow you to take that premium plan and charge a premium pricebetter yetwhen you use an iamthat “premium plan” can be ready as soon as you decide to go after enterprise clientswhen you offer enterprise login and security compatibilityyou instantly open the door to a whole new — 
and efficient — revenue stream", "image" : "https://cdn.auth0.com/blog/ga/budgetlogo.png", "date" : "February 03, 2017" } , { "title" : "Migrating a PHP 5 App to PHP 7 (Development Environment) - Part 1", "description" : "Learn how to migrate a PHP 5 application to PHP 7: Setup and development environment.", "author_name" : "Prosper Otemuyiwa", "author_avatar" : "https://en.gravatar.com/avatar/1097492785caf9ffeebffeb624202d8f?s=200", "author_url" : "http://twitter.com/unicodeveloper?lang=en", "tags" : "php5", "url" : "/migrating-a-php5-app-to-php7-part-one/", "keyword" : "tldrmany php applications are still running on php 5xnot ready to take full advantage of the awesome features that php 7 offersa lot of developers have not made the switch because of certain fears of compatibility issuesmigration challenges and the strange awkward feeling that migrating will take away a big chunk of their timein this tutorialyoull learn how to upgrade your php 5 application to php 7 starting from upgrading your development environmentphp 5 and php 7php 5 has been around for a very long timeover 10 years nowin factmany production php apps are currently running on either php 5253 or 56php 5 brought a lot of awesome features to php such asrobust support for object oriented programmingstandard php librarysplclosuresnamespacesmagical methods for metaprogrammingmysqli - improved mysql extensioncleaner error handlingbetter support for xml extensionsunfortunatelyevery thing that has a beginning must have an endphp 56 active support ended january 192017it will receive security support until december 312018php 5 and 7 release and support durationphp 70 was officially released on december 32015 with a lot of new features and better performance benefitsit is twice as fast as php 5a summary of the new features are highlighted belowreturn and scalar type declarationsbetter unicode supportnull coalescing operatorfatal errors conversion to exceptionsgenerator enhancementanonymous classessecure 
random number generatorremoval of deprecated featuresand much moreif you arent using any of the deprecated features in your php 5 appthen the transition to php 7 will be seamlessin the next postill give a very detailed rundown of php 7 featuresincluding the deprecated featuresupgrading your development environment to php 7the first step to upgrading your application to use php 7 features is to migrate your development environment from php 5x to php 7we will cover how to upgrade your development environment to run php 7x on ubuntucentoswindows and mac os machinesmac os xif you are a fan of homebrewyou can install php 70 via homebrew like sobrew tap homebrew/dupesbrew tap homebrew/versionsbrew tap homebrew/homebrew-phpbrew unlink php56brew install php70if you were using php 5then you should unlink the old php by running brew unlink php56 else unlink whatever version is present before you go ahead to install php 70another option is to install it via curl on your terminal like socurl -s https//php-osxliipch/installshbash -s 70windowsif you are a fan of wamp or xamppthen you can just download the latest versions of the softwareit comes packaged with php 7download and install the last/latest versionanother option is to download the php 70 distribution for windows from http//windowsphpnet/download#php-7ubuntuif you are running ubuntu on your machineespecially around v14 and 150 by running these commandssudo apt-get updatesudo add-apt-repository ppaondrej/phpsudo apt-get install -y php70-fpm php70-cli php70-curl php70-gd php70-intl php70-mysqlnoteyou can check out how to install php 7 and nginx hereand manually build memcached module for php 7debianif you are running debian on your machineespecially around v6v7 and v80 by doing the followingopen up your /etc/apt/sourcesfileand make sure you have these commands belowif you are using a jessie distributiondeb http//packagesdotdeborg jessie alldeb-src httporg jessie allif you are using a wheezy distributiondeb httporg wheezy 
alldeb-src httporg wheezy allfetch and install the gnupg keywget https//wwworg/dotdebgpgsudo apt-key add dotdebgpginstall php 70sudo apt-get updatesudo apt-get install php70centos / red hat enterprise linuxif you are running centos or red hat enterprise linux operating system on your machine0 by running the following commands on your terminal like sosudo yum updaterpm -uvh https//dlfedoraprojectorg/pub/epel/epel-release-latest-7noarchrpmrpm -uvh https//mirrorwebtaticcom/yum/el7/webtatic-releaserpmsudo yum install php70wsudo yum install php70w-mysqlwhen you are donerun this command php -vyou should see something like thisphp 7clibuiltdec 2 2015 204232ntscopyrightc1997-2015 the php groupzend engine v31998-2015 zend technologiesphpbrewphpbrew is a tool that you can use to build and install multiple versions of php on your machineit canbuild php with different variants like pdomysqlsqlitedebug etccompile apache php module and separate them by different versionsswitch versions very easily and is integrated with bash/zsh shellinstall &enable php extensions into current environment with easeinstall multiple php into system-wide environmentdetect path for homebrew and macportsphpbrewyou can install it on your machine like socurl -l -o https//githubcom/phpbrew/phpbrew/raw/master/phpbrewchmod +x phpbrewthen you can install it into your bin folder like sosudo mv phpbrew /usr/local/bin/phpbrewnotemake sure you have /usr/local/bin in your $path environment variableyou can install php 7 by running the following commandsphpbrew self-updatephpbrew install next as php-710phpbrew use php-70you can use phpbrew to install php 70 from github like sophpbrew install githubphp/php-src@php-70 as php-70most timeswe use php with other extensions such as mysqlpdoopenssl etcyou can use phpbrew to build your php environment with various variants like sophpbrew install 70 +mysql+mcrypt+openssl+debug+sqlitethis command above will build php with mysqlmcryptopenssldebug and sqlitevagrantvagrant 
provides a simpleelegant way to manage and provision virtual machinesthe development environments that run on vagrant are packaged via vagrant boxesvagrant boxes are completely disposableif something goes wrongyou can destroy and re-create the box in minutesone of such boxes i recommend is laravel homesteadnoteyou can check out these awesome free courses on learning how to use vagrant on https//serversforhackerscomlaravel homesteadlaravel homestead is an officialpre-packaged vagrant box that provides you a wonderful development environment without requiring you to install phpa web serverand any other server software on your local machinehomestead runs on any windowsmacor linux systemit includes the followingubuntu 1604gitphp 7latest version of phpnginxmysqlmariadbsqlite3postgrescomposernodewith yarnpm2bowergruntand gulpredismemcachedbeanstalkdinstall virtualbox 5or vmwareand vagrantnow that you have vagrant and virtualbox or vmware installedgo ahead and download the laravel homestead box like sovagrant box add laravel/homesteadfollow the instructions on the laravel homestead documentation to find out more about the installation processi recommend windows users to take a stab at using laragonit provides an alternative but suitable and powerful environment like laravel homesteadphp7devanother vagrant image is php7dev by rasmus lerdorfcreator of phpit is a debian 8 vagrant image which is preconfigured for testing php apps and developing extensions across many versions of phpyou can gloriously switch between php versions by using the newphp commandfollow the instructions on the readme to find out how to installconfigure and usevaletvalet is a php development environment for mac minimalistsit was built by taylor and adam wathan of the laravel communityit is a blazing fast development environment that uses roughly 7mb of ramit requires homebrewlaravel valet configures mac to use phps built-in web server in the background when your machine startswith valetif you create a 
project folder called auth0-phpthen you can just open auth0-phpdev in your browser and it will serve the contents of the folder automaticallyyou can share whatever you are working on locally with someone in another part of the world by just running this commandvalet sharevalet uses ngrok under the hood to shareyou can even serve a local site over encrypted tls using http/2 by invoking a command like sovalet secure blogwhere blog is the name of the site or project foldervalet generates a fresh local tls certificateinvoke the secure commandsite is served over https locallyvery awesomeout of the boxvalet supports laravellumensymfonyzendcakephp 3wordpressbedrockcraftstatamic and jigsawhoweveryou can extend valet with your owndriversfollow the instructions on the laravel valet documentation to find out how to install and get started using itdockerphp7-dockerizedphp7-dockerized is a simple php 7 docker and compose environment that is bundled with nginx and mysqlfollow the instructions on setting up a local php 7 development environment with docker and composelaradocklaradock is a docker php development environment that gives you a wonderful development environment without requiring you to install php 7nginxredisand any other software on your machinesclone laradock inside your project like sogit clone httpscom/laradock/laradockgitenter the laradock folder and run this commanddocker-compose up -d nginx mysql redis beanstalkdopen yourenv file and set the followingdb_host=mysqlredis_host=redisqueue_host=beanstalkdfollow the instructions on the laradock documentation to find out how to install and configure itphpdockerphpdockerio is a php and docker generated environmentit supports php 7 up until 71 betafollow the instructions to set it up like soclone httpscom/phpdocker-io/phpdockeriocopy app/config/parametersymldist into app/config/parametersymlrun composer installrun bower installrun php bin/console assetsinstall --symlink --relativerun docker-compose up -ddont hesitate to 
submit an issue on the phpdocker-io repo if you hit a roadblockchris fidao has a fantastic course on dockerwith his course on shippingdockercomll learn how to use docker in developmenttesting and productionthere are different ways of setting up a php 7 development environmentthe few i have mentioned here should give you a lot of options in getting your machine ready to effectively test php 7 featuresconclusionwe have successfully covered various ways of setting up a php 7 development environmentthe first step to migrating an app from a specific language version to another is ensuring that the development environment supports the new versiondo you have other ways of setting up php 7 development environmentsare you currently using an awesome tool to run your php 7 appsplease let me know in the comments sectionin the next articlewell go through all the features of php 7 that you can leverage when migrating your php 5 application", "image" : "https://cdn.auth0.com/blog/migration/PHPlogo.png", "date" : "February 02, 2017" } , { "title" : "Better User Management with the Delegated Administration Dashboard", "description" : "Learn how to use the Delegated Administration Dashboard extension to expose the users dashboard for a select group of users and build a powerful user management workflow.", "author_name" : "Ado Kukic", "author_avatar" : "https://s.gravatar.com/avatar/99c4080f412ccf46b9b564db7f482907?s=200", "author_url" : "https://twitter.com/kukicado", "tags" : "extension", "url" : "/delegated-admin-v2/", "keyword" : "tldr the delegated administration dashboard extension exposes the users tab of the auth0 dashboard allowing you to easily and securely grant limited access for privileged user accountsthe extension exposes a number of hooks allowing you to provide a customized and fine-grained experiencetodaywe will look at how the delegated admin extension can help a growing organization simplify their user management workflow by giving subordinate accounts access to 
createeditand further manage various user accounts throughout the organizationbuilding modern applications is only half the battleas your app growsthe need for excellent management and maintenance tools becomes keyuser management is one area where you dont want to get this wrongif you are using auth0 for managing modern identity in your applications and are at a point where you need more control over user managementthen i would like to introduce you to our delegated administration dashboard extensionthe delegated administration dashboard or delegated admin extension allows you to give fine-grained access to user data stored and accessed through auth0with this extensionyou can give individual users access to viewmanageand edit users in your appswithout giving them the proverbial keys to the kingdom akafull access to the auth0 dashboardthe delegated admin dashboard allows companies to build and enforce powerful user management workflowstweet this todaywe will look at how you can utilize the delegated admin extension to expose only the user dashboard to a set of privileged userswe will be doing this in the context of a fictional company that has grown tremendously and needs a better way to delegate user management accessour example is a common one but there are many use cases where this extension can be appliedfor examplea saas platform may want to give their clients an easy to use dashboard to manage their tenantsanother example could be an organization wishing to grant specific access to various departmentsit support would be able to viewand delete all organizational accountswhile customer support would only have access to customerswell try to address various use cases throughout the post to show the versatility of the extensionlets get startedcloudcakes goes globalcloudcakes is a fictional company that delivers on-demand cakesusers simply place an order through the companys web or mobile app and within 30 minutes a cake is deliveredthe company has gone 
globalamassing millions of users and opening many franchises along the wayeach franchise operates independently and serves a designated local marketthe company has scaled operations in many waysbut has never really had a solid user management strategythey recently switched to auth0 for managing their usersbut now need a way to allow individual franchises to have more control over their usersdelegated administration extensionthe delegated administration extension will allow cloudcakes to better delegate access to their vast pool of usersas it stands only the executives from cloudcakes corporate can access the auth0 dashboardgiving access to the dashboard for all franchise owners is not an optionthey could use the auth0 management api to build an experience for the franchise ownersor they could use the delegated administration extension to expose only the users section of the dashboardthe latter seems like a much better options see how cloudcakes and your organization can accomplish this quickly and easilyto use the delegated administration dashboard you will need to have an active auth0 accountif you dont already have oneyou can sign up for freecreate a new clientthe first thing we are going to do is create a new auth0 client to house the user accounts that will have access to the users dashboardthis client will essentially act as the command center for the users dashboardto create the clientnavigate to the auth0 dashboard and click on the new client buttonyou can name the client whatever you wantll just name ours cloudcakes incset the type of app as single page app* and click create**with the new client createdgo ahead and copy its client idnavigate to the bottom of the settings tab in this newly created client and click on the show advanced settings linkfrom herenavigate to the oauth tab and in the allowed apps/apis section paste in the client idadditionally in this sectionchange the jsonwebtoken signature algorithm to rs256finallyscroll up to the allowed callback 
urls section and here we will add the url that will be used to access the users dashboardthe url will follow this structure https//your-auth0-usernamelocalewebtaskio/auth0-delegated-admin/loginso since i am in the us and my username is adobot the url i will add is https//adobotussave your changes and navigate to the extensions tab in the main menusetup new database connectionin addition to setting up a new client for our users dashboardll also want to setup a new database connection to store our privileged userswe could use an existing data store if we really wanted tobut its more secure to isolate these users in their own databaseas always you can either store the users with auth0 or connect to anydatastoreto create the database connectionhead over to the database connections in the auth0 dashboard and select create db connectionname your connectionselect how users will loginand it is recommended you disable sign-ups so that users dont have the option to directly sign up for an accountonce the connection is created go back to the client you are going to use for the users dashboard and enable just this newly created connection for itthis will ensure that only users that are stored in this database can login and access the users dashboardenabling the delegated admin extensionto enable the delegation admin dashboard extensionyou will just need the client id you copied earlierfrom the extensions sectionnavigate to the very bottom where you will find the delegation admin dashboard extensionclick on itand a modal dialog will pop up asking you to input some data before enabling the extensionthe two fields you will need to provide data for are the extension_client_id and titleextension client id will be the client id you copied earlier and the title can be anythingyou can also optionally add a link to a css file to customize the look and feel of the users dashboardbut well omit that hereclick install to enable the extensionpopulating the databaseweve enabled the 
delegation admin dashboard extension but its of little use to us now since we dont have any users capable of accessing its change thatnavigate to the users tab in the auth0 dashboard and create a new userbe sure to place this user in the correct databasewith the user createds go ahead and login to see the users dashboardnavigate to your dashboard urlwhich again follows the httpsio/auth0-delegated-admin/login patternattempting to login with the newly created user will give you access to the users dashboard but you will not be able to view or do anything as the system does not know what permissions this user hasll need to go edit the users metadata and let the system know what type of user is logging in and what they should be able to dos do that nowunderstanding user rolesthe delegation administration dashboard supports two unique user rolesadministrator and userthe user role allows the logged in account to searchcreateand execute other management roles on user accounts they have access towhile the administrator role additionally has access to logs as well as ability to configure hooks and other settings for users of the users dashboardto grant one of these roles to our userll need to edit the app_metadata for the user and add a role attributes give our newly created user the administrator rolego to their account in the auth0 dashboardclick edit in the metadata sectionand for app metadata add the following code{rolesdelegated admin - administrator}save this change and go back to the users dashboardrefresh the page and now the user will have full access to the users dashboard and will see all of the existing users across all connectionslogsand will have the ability to configure the users dashboardso far so goodif you go back and change the role todelegated admin - userand refresh the pageyou will just be able to see theof usersbut not logs and you will not have the ability to make configuration changes to the users dashboard with the accountyou may have noticed that 
regardless of the role you gave to your userthey were able to see all of the users across all of your auth0 connectionsin many cases you would not want this to happeninstead youd want to have fine-grained control over which connections an account has access toll address that in the next sectionfine-grained control with hooksthe user roles in the previous section give us some controlbut in many instances we would want further controlwe can accomplish this with additional properties stored in the users app_metadatacloudcakes has separated its operations into various departmentsll add a department attribute in our app_metadata to store which department a user belongs tos edit our current and only user and make them part of the executive departmentsimply edit their app_metadata data to readdepartmentexecutive}the department field can be set to any string value and you can have as many departments as you see fithooks will give us additional functionality and combined with our metadata will allow us to grant fine-grained control over each user accountif you are familiar with how rules work with the auth0 platformyoull feel right at homeessentiallyhookslike rulesrun whenever an action triggers them such as creating a new user account or viewing a pagehooks are written in the pattern offunctionctxcallback{ /* perform any type of logic */ return callbacknull{}// the second paramter adds any data that should be passed with the callback}lets see how we can extend the capabilities of the users dashboard with hooksto access hooksyou will need to be logged in with a user with the role of delegated admin - administratorhead over to the delegated admin dashboard and loginonce inclick on your users email in the top right corner and a dropdown menu will openfrom here click on the configuration tab and you will see the configuration page where we will add our hooksfilter hooksfilter hooks allow us to control theof users that can be accessed by the logged in accountthis hook is called 
whenever the user lands on the users dashboardfor our cloudcakes examples assume that a manager of each cloudcakes franchise can only see the users that have done business with their specific franchisein the auth0 dashboardeach franchise has its owndatabase connectioncloudcakes-franchise-2479 is a database connection containing the users that signed up with cloudcakes store 2479ll also assume that the department of each manager is franchise ownerll assume that each franchise owner has an additional app_metadata value for the franchise that they owns say franchise_owned is the fielda sample app_metadata therefore may befranchise ownerfranchise_owned2479}what we want to do is only display users that belong to cloudcakes store 2479s see how we can accomplish this with a filter hook{ // get the department from the current users metadatavar department = ctxrequestuserapp_metadata &&app_metadatavar franchise = ctx// if the user does not belong to a department well throw an erroriflength{ return callbacknew errorthe current user is not part of any department} // the executive department can see all usersdepartment ===} else if{ // more details on syntax of query available herehttps//auth0com/docs/api/management/v2/user-search return callbackidentitiesconnectioncloudcakes-franchise-+ franchise +} return callback}this filter hook will display all users for all connections if the department is an executivebut will only display users belonging to a specific franchise if the logged in account belongs to the franchise owner departmentfor the filteryou can input any filter query in lucene syntaxso lets say you wanted to enhance the filter results to also return theof other franchise owners as wellwriting this queryreturn callbackor app_metadatawill return theof users belonging to a franchise but also all of the other franchise owners in the systemthis could be useful if you wanted togive managers to ability to view contact details of all the other franchise ownersaccess 
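The filter-hook logic above can be sketched as a small runnable function. This is a sketch only: the `department` / `franchise_owned` fields are the hypothetical app_metadata layout from the CloudCakes example, and the callback convention follows the `function(ctx, callback)` hook pattern shown earlier. Executives get an unfiltered list; franchise owners get a Lucene query scoped to their own store's connection.

```javascript
// Sketch of a Delegated Admin filter hook (hypothetical app_metadata fields).
// Calling back with no query means "show everyone"; calling back with a
// Lucene query string restricts which users the logged-in account can see.
function filterHook(ctx, callback) {
  var meta = (ctx.request.user.app_metadata) || {};
  if (!meta.department) {
    // Users outside any department cannot see anyone
    return callback(new Error('The current user is not part of any department.'));
  }
  if (meta.department === 'executive') {
    return callback(); // executives see every user, no filter applied
  }
  // Franchise owners only see users stored in their own store's connection
  return callback(null, 'identities.connection:"cloudcakes-franchise-' + meta.franchise_owned + '"');
}
```

A franchise owner with `franchise_owned: 2479` would thus be handed the query `identities.connection:"cloudcakes-franchise-2479"`, matching the connection-per-franchise layout described above.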
hookaccess hooks determine what actions a logged in account is allowed to do with the user accounts they can viewcertain accounts may only be able to read datawhile others may be able to edit or delete user accountsin the cloudcakes exampled want the logged in user to be able to only read the data of other franchise ownersbut they should be allowed to do everything with users belonging to their connections see how we would implement this functionality} if{ // check to see if the user and the account accessed share a franchise store ifpayload== franchise{ // if they do notonly allow the user to read data ifaction ===read{ return callback} else { // otherwise return an error return callbackyou can only read user data} } else { // if the franchise store is the samethen we can assume that the account is a customer account of the franchise return callback} } return callback}as you can see with just these two hooks in placewe can create fine-grained access control for our organizationwe can query off of any data we have on our users and decide what actions the logged in account can takethe following are supported actions that a logged in users dashboard account can takeuserdeleteuserresetpasswordchangeusernamechangeemailreaddevicesreadlogsremovemultifactor-providerblockuserunblockusersendverification-emailcreate hookthe next hook i want to talk about is the create hookthe code here is executed when a new user is created from the users dashboardusing the account app_metadata againwe can further fine-tune the experiencefor cloudcakeswe want to enforce that only members of the executive department can create franchise ownerss see how we would accomplish this// get the department from the current uservar currentdepartment = ctxcurrentdepartment} // check to see if an account being created will belong to the `franchise owner` department ifcurrentdepartment ==memberships[0] =={ // if it willbut the account attempting to create it is not an executivethen the call will fail 
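The access-hook behavior described above (read-only access to other franchise owners, full rights over your own store's users) can be sketched like this. The exact shape of `ctx.payload` and the action name `read:user` are assumptions for illustration, not a fixed API; only the decision logic mirrors the CloudCakes example.

```javascript
// Sketch of a Delegated Admin access hook (ctx.payload shape and the
// 'read:user' action name are assumptions): franchise owners may only read
// accounts belonging to other franchises, but keep full rights over users
// of their own store; executives may do anything.
function accessHook(ctx, callback) {
  var meta = (ctx.request.user.app_metadata) || {};
  var target = (ctx.payload.user.app_metadata) || {};

  if (meta.department === 'executive') {
    return callback(); // executives may perform any action
  }
  if (target.franchise_owned && target.franchise_owned !== meta.franchise_owned) {
    // Target account belongs to a different franchise: read-only
    if (ctx.payload.action === 'read:user') {
      return callback();
    }
    return callback(new Error('You can only read user data.'));
  }
  return callback(); // same franchise (or a plain customer): allow the action
}
```

Calling back with no error allows the requested action; calling back with an `Error` blocks it, which is how the dashboard enforces the per-role restrictions.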
        return callback(new Error('You can only create users within your own department'));
    } else {
        // otherwise create the account
        return callback(null, {
            email: email,
            password: password,
            app_metadata: { department: ctx.payload.memberships[0] }
        });
    }
}

You may be asking yourself: how do you assign membership roles? If you are planning on using the Users Dashboard to create users that will belong to departments, then you can use the membership hook.

Membership hook. The membership hook will allow you to add a field in the create-user UI that lets an account assign a membership to the account being created. In CloudCakes' case, we will assign the membership to the department in our app_metadata. The simplest way to do this is as follows:

function(ctx, callback) {
    return callback(null, {
        memberships: [ ... ]
    });
}

Now, when the logged-in account goes to create a new user from the Users Dashboard, they will be able to assign a department immediately to the new user.

Settings query. There is one final hook that we can implement in the Users Dashboard: the settings query. This will allow us to customize the look and feel of the Users Dashboard experience. We can edit such settings as which connections to display when creating a new user, which CSS stylesheet to load, or even change the wording of the different sections. We want to make sure that when a franchise owner goes to create a new user, the user is automatically created in their database connection; we won't even give them the option of seeing all the other connections. Let's see how we would implement this:

function(ctx, callback) {
    var department = ctx.request.user.app_metadata.department;
    return callback(null, {
        // only these connections should be visible in the connections picker;
        // if only one connection is available, the connections picker will not be shown in the ui
        connections: [ ctx.connection ]
    });
}

For more information on fields that you can set via the settings query, check out the docs.

Putting it all together. Combining Auth0 with the Delegated Administration Dashboard allowed CloudCakes to give each one of their franchise owners centralized access to their users in a safe and secure way. Franchise owners could log in and manage their userbase while CloudCakes
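The department-based visibility rules above boil down to building a query string from a user's app_metadata. As a standalone sketch (a hypothetical buildUserFilter helper, not the extension's actual hook signature), the same logic looks like this:

```javascript
// Hypothetical helper mirroring the filter-hook logic described above:
// executives see everyone (no filter), franchise owners get a
// Lucene-style query restricted to their own database connection.
function buildUserFilter(appMetadata) {
  var department = appMetadata && appMetadata.department;
  if (!department) {
    throw new Error('The current user is not part of any department');
  }
  // Executives see all users: return no filter at all.
  if (department === 'Executive') {
    return null;
  }
  // Franchise owners only see users from their own franchise connection.
  return 'identities.connection:cloudcakes-franchise-' + appMetadata.franchise_owned;
}
```

For example, an account with app_metadata { department: 'Franchise Owner', franchise_owned: 2479 } would produce the query 'identities.connection:cloudcakes-franchise-2479'.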
corporate still owned all of the data under a single umbrella. Extensibility through hooks made the Users Dashboard useful for both the owners of CloudCakes and its franchise partners.

Conclusion. The Delegated Administration Dashboard extension is a great tool for giving limited access to only the Users Dashboard of Auth0. It allows organizations to enforce fine-grained permissions for users accessing the dashboard and removes the need to give full access to the Auth0 dashboard. Hooks allow the Users Dashboard to meet the needs of any user-management workflow, whether that's limiting access or enforcing specific criteria. If your organization needs a better way to manage your Auth0 users, give the Delegated Admin extension a try today. Don't already have an Auth0 account? Sign up for free to get started. 'The Delegated Admin extension is a great tool for giving limited access to the Auth0 dashboard.' Tweet this.", "image" : "https://cdn.auth0.com/blog/delegated-admin-cloud-cakes/hero.png", "date" : "February 01, 2017" } , { "title" : "Beating JSON performance with Protobuf", "description" : "Protobuf, the binary format crafted by Google, surpasses JSON performance even on JavaScript environments like Node.js/V8 and web browsers.", "author_name" : "Bruno Krebs", "author_avatar" : "https://www.gravatar.com/avatar/76ea40cbf67675babe924eecf167b9b8?s=60", "author_url" : "https://twitter.com/brunoskrebs", "tags" : "protobuf", "url" : "/beating-json-performance-with-protobuf/", "keyword" : "TL;DR: Protocol Buffers, or Protobuf, is a binary format created by Google to serialize data between different services. Google made this protocol open source, and it now provides support, out of the box, for the most common languages, like JavaScript, Java, C#, Ruby and others. In our tests, this protocol performed up to 6 times faster than JSON.

What is Protobuf? Protocol Buffers, usually referred to as Protobuf, is a protocol developed by Google to allow serialization and deserialization of structured data. Google developed
it with the goal of providing a better way, compared to XML, to make systems communicate, so they focused on making it simpler, smaller, faster and more maintainable than XML. But, as you will see in this article, this protocol even surpassed JSON, with better performance, better maintainability and smaller size.

How does it differ from JSON? It is important to note that, although JSON and Protobuf messages can be used interchangeably, these technologies were designed with different goals. JSON, which stands for JavaScript Object Notation, is simply a message format that arose from a subset of the JavaScript programming language. JSON messages are exchanged in text format and, nowadays, they are completely independent of it and supported by, virtually, all programming languages. Protobuf, on the other hand, is more than a message format; it is also a set of rules and tools to define and exchange these messages. Google, the creator of this protocol, has made it open source and provides tools to generate code for the most used programming languages around, like JavaScript, Java, PHP, Ruby, Objective-C, Python, C++ and Go. Besides that, Protobuf has more data types than JSON, like enumerates and methods, and is also heavily used in RPCs (Remote Procedure Calls).

Is Protobuf really faster than JSON? There are several resources online that show that Protobuf performs better than JSON, XML, etc. - like this one and this one - but it is always important to check whether this is the case for your own needs and use case. Here at Auth0, I have developed a simple Spring Boot application to test a few scenarios and measure how JSON and Protobuf performed. Mostly, I have tested serialization in both protocols to make two Java applications communicate and to make a JavaScript web application communicate with this backend. The main reason to create these two scenarios - Java to Java and JavaScript to Java - was to measure how this protocol would behave in an enterprise environment like Java, and also in an environment where JSON is the native message format. That is, what I show here is data from an environment
where JSON is built in and should perform extremely fast (JavaScript engines), and from an environment where JSON is not a first-class citizen. The short answer to the question is: yes, Protobuf is faster than JSON. But this answer is neither useful nor interesting without the data that I gathered in my experiments. Let's take a look at the details now.

Test sample. To support the measurements, I created three Protobuf messages: Address, to hold just the street and number; Person, to hold the name, a collection of addresses, a collection of mobile numbers, and a collection of email addresses; and People, to hold a collection of Person messages. These messages were assembled together in an application with four RESTful endpoints: one that accepted GET requests and returned a list of 50 thousand people in Protobuf format; another that accepted GET requests and returned the same list of 50 thousand people, but in JSON format; a third that accepted POST requests with any number of people in Protobuf format; and a fourth that accepted POST requests with any number of people in JSON format.

JavaScript to Java communication. Since there are a lot of JavaScript engines available, it is valuable to see how the most popular of them behave with this set of data. So I decided to use the following browsers: Chrome, as this is the most popular browser around and its JavaScript engine is also used by Node.js; Firefox, as this is another very popular browser; and Safari, as this is the default browser on MacBooks and iPhones. The following charts expose the average performance of these browsers on 50 subsequent GET requests to both endpoints - the Protobuf and JSON endpoints. These 50 requests per endpoint were issued twice: first when running the Spring Boot application with compression turned on, and then with compression turned off. So, in the end, each browser requested all this 50-thousand-people data 200 times. As you can see in the charts above, the results for the compressed environment were quite similar for both Protobuf and JSON. Protobuf
messages were 9% smaller than JSON messages, and they took only 4% less time to be available to the JavaScript code. This can sound like nothing but, considering that Protobuf has to be converted from binary to JSON - JavaScript code uses JSON as its object literal format - it is amazing that Protobuf managed to be faster than its counterpart. Now, when we have to deal with non-compressed messages, the results change quite a bit. Let's analyze the charts below. In these situations, Protobuf performs even better when compared to JSON: messages, in this format, were 34% smaller, and they took 21% less time to be available to the JavaScript code. When issuing POST requests, the difference becomes almost imperceptible, as this kind of request usually doesn't deal with heavy messages; more often than not, these requests just handle the update of a few fields on a form or something similar. To make the test trustworthy, I issued 50 requests with just one Person message, with a few properties like email addresses and mobiles, on it. The results can be checked below. In this case, the message sizes were not even different, mainly because the messages were so small that the metadata about them was heavier than the data itself. The time to issue the request and get a response back was almost equal as well, with only a 4% better performance from Protobuf requests when compared to JSON requests.

Java to Java communication. If we were to use only JavaScript environments, like Node.js applications and web browsers as interfaces, I would think twice before investing time in learning and migrating endpoints to Protobuf. But when we start adding other platforms, like Java, Android, etc., then we start to see real gains from using Protobuf. The chart below was generated with the average performance of 500 GET requests issued by one Spring Boot application to another Spring Boot application. Both applications were deployed on different virtual machines hosted by DigitalOcean. I chose this strategy to simulate a common scenario where two microservices are communicating
through the wire. Let's see how this simulation ran. Now this is a great performance improvement: when using Protobuf in a non-compressed environment, the requests took 78% less time than the JSON requests. This shows that the binary format performed almost 5 times faster than the text format. And, when issuing these requests in a compressed environment, the difference was even bigger: Protobuf performed 6 times faster, taking only 25ms to handle requests that took 150ms in JSON format. As you can see, in environments where JSON is not a native part, the performance improvement is huge. So, whenever you face latency issues with JSON, consider migrating to Protobuf.

Are there any other advantages and disadvantages? As with every decision you take when choosing one message format or protocol over another, there will be advantages and disadvantages. Protocol Buffers suffers from a few issues, as I list below:

- Lack of resources: you won't find that many resources about using and developing with Protobuf; do not expect very detailed documentation, nor too many blog posts.
- Smaller community: probably the root cause of the first disadvantage. On Stack Overflow, for example, you will find roughly 1,500 questions marked with the protobuf tag, while JSON has more than 180 thousand questions on the same platform.
- Lack of support: Google does not provide support for some programming languages, like Swift, R, Scala, etc. Sometimes you can overcome this issue with third-party libraries, like Swift Protobuf provided by Apple.
- Non-human readability: JSON, exchanged in text format and with a simple structure, is easy for humans to read and analyze; this is not the case with a binary format.

Although choosing Protobuf brings these disadvantages along, this protocol is a lot faster in some situations, as I demonstrated above. And there are a few other advantages:

- Formal format: formats are self-describing.
- RPC support: server RPC interfaces can be declared as part of protocol files.
- Structure validation: having a predefined and larger (when compared to JSON) set
of data types, messages serialized in Protobuf can be automatically validated by the code responsible for exchanging them.

How do we use Protobuf? Now that you know that Protobuf is faster than JSON, and you also know its advantages and disadvantages, let's take a look at how to use this technology. Protobuf has three main components that we have to deal with:

- Message descriptors: when using Protobuf, we have to define our message structures in .proto files.
- Message implementations: message definitions are not enough to represent and exchange data in any programming language; we have to generate classes/objects to deal with data in the chosen programming language. Luckily, Google provides code generators for the most common programming languages.
- Parsing and serialization: after defining and creating Protobuf messages, we need to be able to exchange them. Google helps us here again, as long as we use one of the supported programming languages.

Let's catch a glimpse of each of these components.

Protobuf message definition. As already mentioned, messages in Protobuf are described in .proto files. Below you can find the three message descriptors that I used in my performance tests. I defined all of them in the same file, which I called people.proto:

syntax = 'proto3';
package demo;
option java_package = 'com.auth0.protobuf';

message People {
    repeated Person person = 1;
}

message Person {
    string name = 1;
    repeated Address address = 2;
    repeated string mobile = 3;
    repeated string email = 4;
}

message Address {
    string street = 1;
    int32 number = 2;
}

The three messages above are very simple and easy to understand. The first message, People, contains just a collection of Person messages. The second message, Person, contains a name of type string, a collection of Address messages, a collection of mobile numbers held as string and, lastly, a collection of email addresses, also held as string. The third message, Address, contains two properties: the first one is street, of type string, and the second one is number, of type int32. Besides these definitions, there are three lines, at the top of the
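Part of why Protobuf messages are so compact is the wire format itself. A minimal sketch (an illustration of the encoding rules, not the official library) of how an int32 field like number = 2 in the Address message is packed as a varint:

```javascript
// Encode an unsigned integer as a Protobuf varint: 7 bits per byte,
// with the high bit set on every byte except the last.
function encodeVarint(value) {
  const bytes = [];
  while (value > 0x7f) {
    bytes.push((value & 0x7f) | 0x80); // low 7 bits + continuation bit
    value >>>= 7;
  }
  bytes.push(value); // final byte has no continuation bit
  return bytes;
}

// A varint field is prefixed by a key byte:
// (field number << 3) | wire type, where wire type 0 means varint.
function encodeInt32Field(fieldNumber, value) {
  return [(fieldNumber << 3) | 0].concat(encodeVarint(value));
}

// Field 2 ('number') holding 150 takes just three bytes on the wire,
// versus the ten-plus characters of the equivalent JSON fragment.
console.log(encodeInt32Field(2, 150)); // [0x10, 0x96, 0x01]
```

This byte-level compactness, multiplied over 50 thousand Person messages, is where the size differences reported above come from.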
file, that help the code generator. First, there is syntax, defined with the value proto3; this is the version of Protobuf that I'm using which, at the time of writing, is the latest version. It is important to note that previous versions of Protobuf used to allow developers to be more restrictive about the messages they exchanged, through the usage of the required keyword, but this is now deprecated and no longer available. Second, there is a package demo definition; this configuration is used to nest the generated classes/objects. Third, there is an option java_package definition; this configuration is also used by the generator to nest the generated sources, the difference being that it applies to Java only. I used both configurations to make the generator behave differently when creating code for Java and when creating code for JavaScript: Java classes were created in the com.auth0.protobuf package, and JavaScript objects were created under demo. There are a lot more options and data types available in Protobuf; Google has very good documentation in this regard over here.

Message implementations. To generate the source code for the proto messages, I used two libraries. For Java, I used the protocol compiler provided by Google; this page on Protocol Buffers' documentation explains how to install it (as I use brew on my MacBook, it was just a matter of issuing brew install protobuf). For JavaScript, I used protobuf.js; you can find its source and instructions over here. For most of the supported programming languages, like Python, Google's protocol compiler will be good enough, but for JavaScript, protobuf.js is better, since it has better documentation, better support and better performance - I also ran the performance tests with the default library provided by Google, but with it I got worse results than I got with JSON.

Parsing and serialization with Java. After having the protocol compiler installed, I generated the Java source code with the following command:

protoc --java_out=./src/main/java/ ./src/main/resources/people.proto

I
issued this command from the root path of the project, passing two parameters: --java_out, which defined ./src/main/java/ as the output directory for the Java code, and ./src/main/resources/people.proto, the path to the .proto file. The generated code is quite complex but, fortunately, its usage is not. For each message compiled, a builder is generated. Check out how easy it is:

final Address address1 = Address.newBuilder()
        .setStreet('Street number ' + i)
        .setNumber(i)
        .build();
final Address address2 = Address.newBuilder()
        .setStreet('Street number ' + i)
        .setNumber(i)
        .build();
final Person person = Person.newBuilder()
        .setName('Person number ' + i)
        .addMobile('111111')
        .addMobile('222222')
        .addEmail('email.person' + i + '@somewhere')
        .addEmail('other.email.person' + i + '@somewhere')
        .addAddress(address1)
        .addAddress(address2)
        .build();

These instances alone just represent the messages, so I also needed a way to exchange them. Spring provides support for Protobuf, and there are a few resources out there - like this one on Spring's blog, and this one from Baeldung - that helped me on that matter. Just be aware that, as in any Java project, a few dependencies are needed. These are the ones I had to add to my Maven project:

<dependencies>
    <!-- Spring Boot deps and etc. above -->
    <dependency>
        <groupId>com.google.protobuf</groupId>
        <artifactId>protobuf-java</artifactId>
        <version>3.1.0</version>
    </dependency>
    <dependency>
        <groupId>com.google.protobuf</groupId>
        <artifactId>protobuf-java-util</artifactId>
        <version>3.1.0</version>
    </dependency>
    <dependency>
        <groupId>com.googlecode.protobuf-java-format</groupId>
        <artifactId>protobuf-java-format</artifactId>
        <version>1.4</version>
    </dependency>
</dependencies>

Parsing and serialization with JavaScript. The protobuf.js library helped me to compile the .proto messages to JavaScript and also to exchange these messages. The first thing I had to do was to install it as a dependency; for this, I used Node.js and npm:

npm install -g protobufjs

The command above enabled me to use the pbjs command line utility (CLI) to generate the code. The following command is how I used this CLI:

pbjs -t static-module -w commonjs -o ./src/main/resources/static/people.js ./src/main/resources/people.proto

After generating the JavaScript code, I used another tool, browserify, to bundle the generated code along with protobuf.js in a single file:

# installing browserify globally to use wherever i want
npm install -g browserify

# running browserify to bundle protobufjs and message objects together
browserify ./src/main/resources/static/people.js -o ./src/main/resources/static/bundle.js

By doing
that, I was able to add a single dependency to my index.html file:

<html>
<body>
    <!-- this has all my protobuf dependencies: three messages and protobufjs code -->
    <script src=bundle.js></script>
</body>
</html>

Finally, after referencing the bundle, I was able to issue GET and POST requests to my Protobuf endpoints. The following code is an AngularJS HTTP GET request and, as such, should be very easy to understand:

// just a shortcut
const People = protobuf.roots.default.demo.People;

let req = {
    method: 'GET',
    // make it clear that it can handle binary
    responseType: 'arraybuffer',
    url: '/some-protobuf-get-endpoint'
};
return $http(req).then(function(response) {
    // we need to encapsulate the response in a Uint8Array to avoid
    // getting it converted to a string
    ctrl.people = People.decode(new Uint8Array(response.data));
});

The POST request is trivial as well:

// just populating some usual object literals
let address = new Address({
    street: 'street',
    number: 100
});
let person = {
    name: 'some person',
    address: [address],
    mobile: [],
    email: []
};
person.mobile.push('732-757-2923');
person.email.push('someone@somewhere');

// encapsulating the object literal inside the protobuf object
let people = new People({
    person: [new Person(person)]
});

// building the post request
let post = {
    method: 'POST',
    url: '/some-protobuf-post-endpoint',
    // transforming to binary
    data: People.encode(people).finish(),
    // avoiding angularjs parsing the data to json
    transformRequest: [],
    headers: {
        // tells the server that a protobuf message is being transmitted
        'Content-Type': 'application/x-protobuf'
    }
};

// issuing the post request built above
$http(post).then(function() {
    console.log('everything went just fine');
});

Not difficult to use the protobuf.js library to exchange binary data, right? If you want, you can also check the JavaScript code that I used to compare Protobuf and JSON performance directly in my GitHub repo.

Conclusion. I have to be honest: I was hoping to come across a more favorable scenario for Protobuf. Of course, being able to handle, in Java to Java communication, 50 thousand instances of Person objects in 25ms with Protobuf, while JSON took 150ms, is amazing. But in a JavaScript environment these gains are much lower. Nevertheless, considering that JSON is native to JavaScript engines, Protobuf still
managed to be faster. Also, one important thing I noticed is that, even though there are not many resources around about Protobuf, I was still able to use it in different environments without having a hard time. So I guess I will start using this technology more frequently now. How about you? What do you think about the speed of Protobuf? Are you considering using it in your projects? Leave a comment!", "image" : "https://cdn.auth0.com/blog/protobuf-json/logo.png", "date" : "January 31, 2017" } , { "title" : "Mozilla Replaces Persona with Auth0 for Identity and Access Management (IAM)", "description" : "Mozilla has chosen to replace their longstanding Persona authentication system with Auth0.", "author_name" : "Martin Gontovnikas", "author_avatar" : "https://www.gravatar.com/avatar/df6c864847fba9687d962cb80b482764??s=60", "author_url" : "http://twitter.com/mgonto", "tags" : "auth0", "url" : "/auth0-mozilla-partnership/", "keyword" : "Bellevue, WA - As of November 30, 2016, Mozilla, one of the largest open-source organizations on the web, has replaced their longstanding Persona authentication system and has instead chosen Auth0 for all of their identity and access management (IAM) needs moving forward. The removal of Persona also means that Mozilla will no longer offer a public-facing authentication service. Auth0 has already been integrated in many Mozilla web properties, including Mozillians, Mozilla Moderator, and Mozilla Reps. Mozilla is making use of Auth0's passwordless authentication, LDAP, and social connections features to make it easier for contributors and employees to gain access to the various services Mozilla provides. We are honored to have Mozilla as both a customer and ally. Auth0 has always embraced and contributed to the open-source community, and we built our platform around open standards like OAuth and OpenID Connect so that our platform can easily integrate and interoperate in any organization while still following industry standards and best practices.

About Auth0. Auth0 provides
frictionless authentication and authorization for developers. The company makes it easy for developers to implement even the most complex identity solutions for their web, mobile, and internal applications. Ultimately, Auth0 allows developers to control how a person's identity is used, with the goal of making the internet safer. As of August, Auth0 has raised over $24m from Trinity Ventures, Bessemer Venture Partners, K9 Ventures, Silicon Valley Bank, Founders Co-op, Portland Seed Fund and NXTP Labs, and the company is further financially backed with a credit line from Silicon Valley Bank. For more information, visit https://auth0.com or follow @auth0 on Twitter.", "image" : "https://cdn.auth0.com/blog/auth0-mozilla-pr/mozilla_logo.png", "date" : "January 30, 2017" } , { "title" : "Machine Learning for Everyone - Part 2: Spotting anomalous data", "description" : "Case study in R reviewing common concepts regarding how to validate, run and visualize a predictive model on production ranking the most suspicious cases.", "author_name" : "Pablo Casas", "author_avatar" : "https://s.gravatar.com/avatar/759facc84628c0cc0746d347f217218e?s=80", "author_url" : "https://twitter.com/datasciheroes", "tags" : "r", "url" : "/machine-learning-for-everyone-part-2-abnormal-behavior/", "keyword" : "Overview: we're going to analyze data that contains cases flagged as abnormal, so we'll build a predictive model in order to spot cases that are not currently flagged as abnormal, but are behaving like ones that are. Topics are: creating a predictive model (random forest); introduction to the ROC value (model performance); understanding label and score prediction; inspection of suspicious cases (audit); prediction and dimension reduction (t-SNE). Let's start! This post contains R code and some machine learning explanations, which can be extrapolated to other languages such as Python. The idea is to create a case study giving the reader the opportunity to recreate the results. You will need the following: download the R engine; download the RStudio IDE. Note: there are some points
oversimplified in the analysis but, hopefully, you'll become curious to learn more about this topic, in case you've never done a project like this. First, install and load the packages (libraries) containing the functions we'll use in this project, and load the data:

# delete these installation lines after 1st run
install.packages('caret')
install.packages('funModeling')
install.packages('Rtsne')

library(caret)
library(funModeling)
library(Rtsne)

## download data from github; if you have any problem, go directly to github:
## https://github.com/auth0/machine-learning-post-2-abnormal-behavior
url_git_data = 'https://raw.github.com/auth0/machine-learning-post-2-abnormal-behavior/master/data_abnormal.txt'
download.file(url_git_data, 'data_abnormal.txt')

## reading source data
data=read.delim('data_abnormal.txt', header = T, stringsAsFactors = F, sep = '\t')

The data contains the following columns:

colnames(data)
## [1] id       abnormal var_1    var_2    var_3    var_4
## [7] var_5    var_6    var_7    var_8

We are going to predict the column abnormal based on the var_1 to var_8 variables. Inspecting the target variable:

freq(data$abnormal)

Almost 3 percent of cases are flagged as abnormal. Next, we create the predictive model using random forest, doing the model parameter tuning with the caret library, using 4-fold cross-validation optimized for the ROC metric (we'll come back to this later). You can find the basics of random forest in the first post of this series.

################################################
## model creation
################################################
set.seed(999)

## setting the validation metric: cross-validation, 4-fold
fitControl = trainControl(method = 'cv',
                          number = 4,
                          classProbs = TRUE,
                          summaryFunction = twoClassSummary)

## creating the model, given the cross-validation method
fit_model = train(abnormal ~ var_1 + var_2 + var_3 + var_4 + var_5 + var_6 + var_7 + var_8,
                  data = data,
                  method = 'rf',
                  trControl = fitControl,
                  verbose = FALSE,
                  metric = 'ROC')

There are some important things to note about the last output: the mtry column indicates a parameter that is optimized by the caret library. The selection is based on the ROC metric. This metric goes from 0.5 to 1 and indicates how well the model distinguishes between true positive and false positive rates. The
higher, the better.

Choosing the 'best' model. Predictive models have some parameters that can be tuned in order to improve predictions based on the input data. In the last example, caret chose the best model configuration based on a desired accuracy metric---ROC, in this case. Random forest does not have many parameters to tune compared with other, similar gradient boosting machines; the parameter to tune here was mtry. Caret tested 3 different values of mtry, and the value which maximizes the ROC value is 6. Cross-validating results is really important; you can get more information in ref. [1].

What is the ROC value? This is a long -long- topic, but here we try to introduce some aspects to start becoming familiar with it. Some history: during World War II, radar detected whether an airplane was coming or not, and operators sent out the following alerts: 'Hey! A plane was spotted!' (positive prediction), or 'Be easy, no plane is coming' (negative prediction). These outcomes can be true or false, so we have four possibilities:

1- If the detection of 'hey!' is true, it is called a true positive.
2- On the other hand, if 'hey!' is false, the result is a false positive, which implies that the radar (predictive model) said a plane was coming but there isn't actually a plane out there.
3- The radar says 'be easy' and the real result is negative: it's a true negative.
4- The radar says 'be easy' and the real result is positive: it's a false negative.

Abnormal example data: in the data we used to build the model, positive is when abnormal=yes, and negative is when abnormal=no. Normally, we associate the positive value with the less-represented one, which is the one we are trying to explain and the least common. Analysis of possibilities: points 1 and 3 imply that the radar, or the predictive model, got the prediction right; with points 2 and 4, the model failed---it predicted one thing and the opposite was actually true. The ROC value measures the trade-off between the true positive and false positive rates. This is because we need to be sure about what the model is saying when it detects a positive outcome: is this
positive prediction reliable or not?

Usage in medicine: this accuracy metric is so powerful that it is used in other fields, such as medicine, to measure the accuracy of certain diagnoses. 'The flu test for this patient was positive.' If this result is confirmed after a while, or by a second test, we've got a true positive. It is used in many tests in which the result is either true or false, and it's very important to know whether we can trust this result.

Usage in Auth0: we did a proof of concept to automatically spot the most suspicious login cases in order to boost the current anomaly detection feature, and the ROC curve was a good option to test the predictive model's sensitivity. More info about the current anomaly detection feature at ref. [2].

Understanding the extremes: knowing the extreme ROC values is a good approach to better understanding the metric. An ROC value of 1 indicates all the positive values returned by the model are correct: a perfect model. An ROC value of 0.5 indicates the positive values returned by the model are similar to random guessing: this is the worst model. See more about ROC in ref. [3].

Going back to the predictive model: we talked about the predictive model output, which is something like positive/negative or yes/no, but it's better to work with probabilities, also known as scores. So the output of any predictive model, in binary or multi-label class prediction, should be the score. For a longer explanation, see ref. [4]. Once we get the score value, we can select all cases above a threshold and label them as abnormal. We assign the probability of being abnormal=yes to each case:

data$score=predict(fit_model$finalModel, type='prob')[,2]

We keep those cases which are actually abnormal=no:

# filtering cases
data_no_abnormal=subset(data, abnormal=='no')

We obtain the top 2% of cases with the highest score of being abnormal:

# obtaining the score to filter top 2%
cutoff=quantile(data_no_abnormal$score, probs = c(0.98))

# filtering most suspicious cases
data_to_inspect=subset(data_no_abnormal, score>cutoff)

And here we've got, in data_to_inspect, ~60 cases which are
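The cutoff-and-filter step can be sketched outside R as well. A plain-JavaScript illustration (not the post's code; note this uses a simple nearest-rank percentile, while R's quantile() interpolates):

```javascript
// Nearest-rank style percentile: sort the values and pick the entry
// at position ceil(p * n) (1-based), clamped to the array bounds.
function quantile(values, p) {
  const sorted = values.slice().sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil(p * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

// Keep the cases labeled 'no' whose model score exceeds the
// p-th percentile of the 'no' scores: the suspicious ones.
function suspiciousCases(cases, p) {
  const notAbnormal = cases.filter(c => c.abnormal === 'no');
  const cutoff = quantile(notAbnormal.map(c => c.score), p);
  return notAbnormal.filter(c => c.score > cutoff);
}
```

With p = 0.98, roughly the top 2% of unflagged cases by score come back for inspection, mirroring the R subset above.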
actually not abnormal, but are showing suspicious behavior. Displaying only the first 6 suspicious ids to further inspect:

head(data_to_inspect$id)
## [1]  59  94 105 107 224 259

Is there a way to visualize how the predictive model 'sees' the data and assigns the probabilities? Let's talk about projections. We've built a machine learning model in order to know the abnormal cases, using random forest. Predictive models handle several variables at the same time, which is different from the classical reporting approach, in which you see two or three variables in one plot. In the following example, you'll see how eight variables are mapped into only two dimensions. In this case there are eight, but there could be thousands. This is quite fancy, since it is like a compression of the information. You've probably already seen this 'complex' technique in geographical maps: they map three dimensions into two. There are several ways to do this with data; the most popular is probably principal component analysis -aka PCA-. However, this doesn't lead to good visualizations. The one we used in this post is t-distributed stochastic neighbor embedding (t-SNE), which is also implemented in languages other than R; more info at ref. [5]. Google did a live demo based on text data [6].

Hands-on R: first, some data preparation--excluding the id and score variables to create the t-SNE model. Also, converting all string variables into numeric ones, since t-SNE doesn't support this type of variable (one-hot encoding, or dummy variables). Actually, we're going to map 17 variables, not eight, because of this transformation.

# excluding id column and score variable
data_2=data[, !(names(data) %in% c('id', 'score'))]
d_dummy = dummyVars(~., data = data_2)
data_tsne = data.frame(predict(d_dummy, newdata = data_2))

Now we create the t-SNE model, adding to it the score variable we created before:

set.seed(999)
tsne_model = Rtsne(as.matrix(data_tsne), check_duplicates=FALSE, pca=TRUE, perplexity=30, theta=0.5, dims=2)
d_tsne = as.data.frame(tsne_model$Y)
d_tsne$abnormal = as.factor(data$abnormal)
d_tsne$score=data$score
d_tsne = d_tsne[order(d_tsne$abnormal),]

Now the magic: plotting the resulting
t-SNE model, which maps 17 variables into two dimensions:

ggplot(d_tsne, aes(x=V1, y=V2, color=abnormal)) +
    geom_point(size=0.25) +
    guides(colour=guide_legend(override.aes=list(size=6))) +
    xlab('') + ylab('') +
    ggtitle('t-SNE on abnormal data') +
    theme_light(base_size=20) +
    theme(axis.text.x=element_blank(), axis.text.y=element_blank()) +
    geom_point(data=d_tsne[d_tsne$abnormal=='yes',], color='black', alpha=1, shape=21) +
    geom_point(data=d_tsne[d_tsne$abnormal=='no' & d_tsne$score>=cutoff,], color='blue', alpha=0.8, shape=5) +
    scale_colour_brewer(palette = 'Set2')

But what are we seeing there? Analysis: the blue rhomboids represent the cases which are not actually flagged as abnormal, but are flagged as 'highly abnormal' by the predictive model; this is what was analyzed before. The pink points represent the cases which are actually flagged as abnormal. The green points are the cases which are not actually abnormal and have a low likelihood of being one of them. Similar cases tend to be closer in the plot: the pink cases are close to the blue ones; these are the cases predicted as highly suspicious, which share similar behavior with the ones actually flagged as abnormal. That's why we left the score variable out of the t-SNE model creation: this model put closely together some cases that were actually abnormal and some that were not, and this makes sense, since these cases were flagged by the random forest as suspicious ones. Look at the biggest islands made of green points---there are no abnormal points (flagged or predicted) near them.

Conclusions: the cases flagged as abnormal, plus the top 2 percent of suspicious ones detected by the random forest, are mapped closer together, away from the normal cases, because they behave differently. This is one way of uncovering the information in the data. Time to play with your own data!

References: [1] Why validate predictive models? Knowing the error. [2] Anomaly detection feature at Auth0. [3] Wikipedia: ROC curve. [4] Data Science Live Book - Data Scoring. [5] Original implementation: t-distributed stochastic neighbor embedding. [6] Open sourcing the Embedding Projector: a tool for visualizing high-dimensional data. Link to clone from GitHub (only R code).", "image" :
"https://cdn.auth0.com/blog/machine-learning-for-everyone/logo.png", "date" : "January 27, 2017" } , { "title" : "Testing React Applications with Jest", "description" : "Learn how to test React applications with the Jest JavaScript testing framework.", "author_name" : "Joyce Echessa", "author_avatar" : "https://s.gravatar.com/avatar/f820da721cd1faa5ef4b5e14af3f1ed5", "author_url" : "https://twitter.com/joyceechessa", "tags" : "react", "url" : "/testing-react-applications-with-jest/", "keyword" : "introductionwriting tests is an integral part of application developmenttesting results in software that has fewer bugsmore stabilityand is easier to maintainin this articlewell look at how to test a react application using the jest testing frameworkjest is a javascript test runner maintained by facebooka test runner is software that looks for tests in your codebaseruns them and displays the resultsusually through a cli interfacethe following are some of the features that jest offersperformance - jest run tests in parallel processes thus minimizing test runtimemocking - jest allows you to mock objects in your test filesit supports function mockingmanual mocking and timer mockingyou can mock specific objects or turn on automatic mocking with automock which will mock every component/object that the component/object test depends onsnapshot testing - when using jest to test a react or react native applicationyou can write a snapshot test that will save the output of a rendered component to file and compare the components output to the snapshot on subsequent runsthis is useful in knowing when your component changes its behaviourcode coverage support - this is provided with jest with no additional packages or configurationtest isolation and sandboxing - with jestno two tests will ever conflict with each othernor will there ever be a global or module local state that is going to cause troublesandboxed test files and automatic global state resets for every testintegrates with 
other testing libraries - jest works well with other testing librariesegenzymechaijest is a node-based runner which means that it runs tests in a node environment as opposed to a real browsertests are run within a fake dom implementationvia jsdomon the command lineyou should note though that while jest provides browser globals such as window by using jsdomtheir behavior is only an approximation of their counterparts on a real browserjest is intended for unit testing an applications logic and components rather than for testing it for any dom quirks it might encounterfor thisit is recommended that you use a separate tool for browser end-to-end teststhis is out of scope of this articlesetting up the sample projectbefore looking at how tests are writtenlets first look at the application well be testingit can be downloaded herein the downloaded folderyou will find three projects - one named starter with no test filesanother named completed with the test files included and another named completed_with_auth0 which contains test files and also adds authentication to the applicationll start with the starter project and proceed to add tests to itthe sample application is a simple countdown timer created in reactto run itfirst navigate to the root of the starter project$ cd path/to/starter/countdowntimerinstall the necessary libraries$ npm installrun webpack$ webpackthen run the application with$ npm startnavigate to http//localhost3000/ in your browseryou should see the followingyou can set a time in seconds and start the countdown by clicking on the start countdown buttonthe functionality of the countdown timer has been separated into three components stored in the app/components folder namely clockjsxcountdownjsx and countdownformthe clock component is responsible for rendering the clock face and formatting the users input to an mmss formatthe countdownform component contains a form that takes the user input and passes it to the countdown component which starts decrementing 
the value every secondpassing the current value to the clock component for displayhaving looked at the sample applicationll now proceed with writing tests for itwriting testslets start by installing and configuring jestrun the following command to install jest and the babel-jest library which is a jest plugin for babelthe application uses babel for transpiling jsx and es6 so the plugin is needed for the tests to work$ npm install --save-dev jest babel-jestwith babel-jest addedjest will be able to work with the babel config filebabelrc to know which presets to run the code throughthe sample application already has this fileyou can see its contents below{presets[es2015react]}the react preset is used to transform jsx into javascript and es2015 is used to transform es6 javascript to es5with that donewe are now ready to write our first testjest looks for tests to run using the following conventionsfiles withtestjs suffixspecjs suffix inside a folder named testsother thanjs filesit also automatically considers files and tests with the jsx extensionfor our projectll store the test files inside a tests folderin the app foldercreate a folder named __tests__for the first testll write a simple test that ensures that jest was set up correctly and that it can run a test successfullycreate a file inside the app/__tests__ foldername it appjsx and add the following to the filedescribeapp=>{ itshould be able to run tests{ expect1 + 2toequal3}to create a testyou place its code inside an itor testblockincluding a label for the testyou can optionally wrap your tests inside describeblocks for logical groupingjest comes with a built-in expectglobal function for making assertionsthe above test checks if the expression 1 + 2 is equal to 3read this for aof assertions that can be used with expectnextmodify the test property of the packagejson file as shownjestyou can now run the added test with npm test and see the results in the terminalyou can also run jest in watch mode which will keep 
it running in the terminal and save you from having to start the tests yourself when you make changes to the codefor this use the npm test -- --watch commandanything that is placed after the first -- is passed to the underlying commandtherefore npm test -- --watch is similar to jest --watchwe only have one test so farbut as you go further into testing your application and add more testsyou might want to exclude some from runningjest allows you to either exclude some tests from running or focus on specific teststo exclude a test from being executeduse xitinstead of itto focus on a specific test without running other testsuse fitnow that we know the application can run testss move on to testing its componentstesting componentsin the __tests__ folderadd another folder named components and inside that folderadd a file named clockthen add the following to the fileimport react fromimport reactdom fromreact-domimport clock fromclockrenders without crashing{ const div = documentcreateelementdivreactdomrender<clock/>this test mounts a component and checks that it doesnt throw an exception during renderingif you run the testit will fail with the error message cannot find modulefromin the application we specify aliases for some files so that we dont have to write their full path every time we import them in another filethe aliases and the files they represent are specified in the webpackconfigjs fileresolve{ root__dirnamealias{ applicationstylesapp/styles/appscssapp/components/clockapp/components/countdowncountdownformapp/components/countdownformextensionsjs]}other test runners like karma are able to pick up the applications setting from the webpack config filebut this is not the case with jestjest doesnt automatically work with webpackin the above caset know how to resolve the aliases specified in the webpack config fileto solve thisyou can either use a third party tool like jest-webpack-alias or babel-plugin-module-resolveror you can add the aliases in jests configuration 
settingsi prefer the latter solution as it is easier to setup and it requires the least modification to the appwith thisjest settings are separate from the apps settingsif i ever wanted to change the test runner usedi would just need to delete jest settings from packagejsonor from the jest config fileand wont have to edit the webpack config file and babel config fileyou can define jests configuration settings either in the packagejson or create a separate file for the settings and then add the --config <path/to/config_file>option to the jest commandin the spirit of separation of concernsll create a separate filecreate a file at the root of the project named jestjs and add the following to itmodulefileextensions]modulenamemapperrootdir>/app/components/clock/app/components/countdownform/app/components/countdown}}modulefileextensions specifies an array of file extensions your modules useby default it includes [nodeif you require modules without specifying a file extensionjest looks for these extensionsso we dont really need the setting in the above file as js and jsx are includedi wanted to include it so you know that it is necessary if your project consists of files with other extensions eif you are using typescriptthen you would include [tstsxin modulenamemapperwe map different files to their respective aliasesrootdir is a special token that gets replaced by jest with the root of the projectthis is usually the folder where the packagejson file is locatedunless you specify arootdir option in your configurationif you are interested in finding out other options that you can set for jestcheck out the documentationin packagejson modify the value of test as shownjest --config jestrun the test again with npm test and it should now passwe now know that the component tested renders without throwing an exceptiontesting business logicweve written a test that assures us that our component renders properlythis however is not an indicator that the component behaves as it should 
and produces the correct outputto test for thisll test the components functions and make sure they are doing what they should be doingfor this well use the enzyme library to write the testsenzyme is a javascript testing utility for react created by airbnb that makes it easier to assertmanipulateand traverse a react components outputit is unopinionated regarding which test runner or assertion library usedand is compatible with the major test runners and assertion libraries availableto install enzymerun the following command$ npm install --save-dev enzyme react-addons-test-utilsthen modify clockjsx as shownimport { shallow } from{ itshould render the clock{ const clock = shallowclock timeinseconds={63}/>const time = <span classname=clock-text>0103</span>expectcontainstimetrueformattimeshould format secondsconst seconds = 635const expected =1035const actual = clockinstancesecondsactualtobeexpecteditshould format seconds when minutes or seconds are less than 10const seconds = 6505the first test remains the samebut since we are using enzyme you could simplify it by using shallowor mountto render itlike soimport { mount } from{ mountthe difference between shallowand mountis that shallowtests components in isolation from the child components they render while mountgoes deeper and tests a components childrenfor shallowthis means that if the parent component renders another component that fails to renderthen a shallowrendering on the parent will still passthe remaining tests test the clockjsx renderand formattimefunctionsplaced in separate describe blocksthe clock components renderfunction takes a props value of timeinsecondspasses it to formattimeand then displays the returned value inside a <span>with a class of clock-textin the test with the describe label of renderwe pass in the time in seconds to clock and assert that the output is as expectedthe formattime describe contains two teststhe first checks to see if the formattimefunction returns a formatted time if given a 
valid input and the second ensures that the function prefixes the minutes or seconds value with 0 if the value is less than 10to call the components function with enzymewe use clockreturns the instance of the component being rendered as the root node passed into mountor shallowrun the tests and they should all passnext well add tests for the countdown componentcreate a file named countdownjsx in the app/__tests__/components folderadd the following to the fileimport testutils fromreact-addons-test-utilsimport countdown fromcountdown/>handlesetcountdowntimeshould set countdown time and start countdowndone{ const countdown = testutilsrenderintodocumentstatecountcountdownstatus1settimeout{ expect91001should never set countdown time to less than zero03000the first test is similar to what we had in clockit just checks that the countdown component rendered okaythe rest of the tests test the handlesetcountdowntimefunction of this componentthis function is called when the form is submitted and is passed the number of seconds enteredif validit then uses this to set the components state which consists of two values - the count and the countdownstatuscomponentdidupdatechecks if the countdownstatus was changed and if so calls the tickfunction which starts decrementing the value of count every secondin the above we use testutils to test the componentwe could have used enzyme functions here as wellbut we wanted to showcase another great tool that makes testing react components easierfacebook recommends both enzyme and testutilsso you can decide which you preferor you can use them bothin factwhen using enzymeyou are essentially using testutils as well since enzyme wraps around the react-addons-test-utils librarywith testutilscomponents are rendered with testutilsthe first test in the block ensures that the countdownstatus of the component is changed when a valid time is passed to handlesetcountdowntimeand that the count has been decremented by 1 after a secondthe second test 
ensures that handlesetcountdowntimestops counting down at 0testing eventsthe last component remaining to test is countdownformthis contains a form that the user uses to enter the time to be counted downll test it to make sure that when a user submits the formthe listener will call onsetcountdowntimeonly if the input is validcreate a file named countdownformimport countdownform fromcountdownform/>should call onsetcountdowntime if valid seconds entered{ const spy = jestfnconst countdownform = testutilscountdownform onsetcountdowntime={spy}/>const form = testutilsfindrendereddomcomponentwithtagformrefsvalue =109testutilssimulatesubmitspytohavebeencalledwithshould not call onsetcountdowntime if invalid seconds entered1h63nottohavebeencalledin the above we use testutils to simulate the form submit eventjest comes with spy functionality that enables us to assert that functions are calledor not calledwith specific argumentsa test spy is a function that records argumentsreturn valuethe value of this and exception thrownif anyfor all its callstest spies are useful to test both callbacks and how certain functions are used throughout the system under testto create a spy in jestwe use const spy = jestthis provides a function we can spy on and ensure that it is called correctlywe then render the countdownform component and pass in the spy as the value of the onsetcountdowntime propswe then set the forms seconds value and simulate a submissionif the value for seconds is validthe spy will be calledotherwise it wontrun the tests and everything should passcoverage reportingas mentioned earlierjest has an integrated coverage reporter that works well with es6 and requires no further configurationyou can run it with npm test -- --coveragebelow you can see the coverage report of our testssnapshot testingsnapshot testing is another feature of jest which automatically generates text snapshots of your components and saves them to disk so if the ui output changes later onyou will get 
notified without manually writing any assertions on the component outputwhen running a snapshot test for the first timejest renders the component and saves the output as a javascript objecteach time the test is run againjest will compare its output to the saved snapshot and if the components output is different from the snapshotthe test will failthis may be an indicator that the component has a bug somewhere and you can go ahead and fix it until its output matches the snapshotor you might have made the changes to the component on purpose and so it is the snapshot that will need updatingto update a snapshot you run jest with the -u flagwith snapshot testingyou will always know when you accidentally change a components behaviour and it also saves you from writing a lot of assertions that check if your components are behaving as expectedll include one snapshot test for the clock component in the sample appyou can include the snapshot test in the clockbut i prefer to have my snapshot tests in separate filescreate a file named clocksnapshotimport renderer fromreact-test-rendererclock component renders the clock correctly{ itrenders correctly{ const seconds = 63const rendered = renderercreateclock timeinseconds={seconds}/>renderedtojsontomatchsnapshotthe above renders the clockwith a value of 63 seconds passed into itand saves the output to a filebefore running the testinstall the following packageit provides a react renderer that can be used to render react components to pure javascript objects$ npm install --save-dev react-test-rendererrun your tests and the output will show that a snapshot has been addedwhen you look at your projectthere will be a __snapshots__ folder inside the app/__tests__/components folder with a file named clocksnap inside itthe following are its contentsexports[`clock component renders the clock correctly renders correctly 1`] = `<div classname=span classname=03 </div>`as you can seeit shows the expected result of having passed 63 to the clock 
componentwith the snapshot test that we just addedwe dont need the test in clockjsx that checks if the rendered output contains a <with a certain string in ityou should include the __snapshots__ folder in your versioning system to ensure that all team members have a correct snapshot to compare withasideusing react with auth0before concluding the articles take a look at how you can add authentication to the react app and ensure the tests work with thisll change the app so that it requires the user to be logged in before they can start the countdown timerin the processll take a look at a caveat that jest has as a node-based test runner that runs its tests on jsdomto get startedfirst sign up for an auth0 accountthen navigate to the dashboardclick on the new client button and fill in the name of the clientor leave it at its defaultselect single page web applications from the client typeon the next pageselect the settings tab where the client idclient secret and domain can be retrievedset the allowed callback urls and allowed originscorsto http3000/ and save the changes with the button at the bottom of the pagell add the auth0 lock widget to our appwhich provides an interface for the user to login and/or signupcreate a folder named utils in the app folder and add a authservicejs file to itimport auth0lock fromauth0-lockimport decode fromjwt-decodeexport default class authservice { constructor{ // configure auth0 thisclientid =your_client_idthisdomain =your_client_domainlock = new auth0lockclientiddomain{}// add callback for lock `authenticated` event thislockonauthenticated_doauthenticationbind// binds login functions to keep this context thislogin = thislogin} _doauthenticationauthresult{ // saves the user token thissettokenidtoken} getlock{ // an instance of lock return new auth0lock} login{ // call the show method to display the widgetshow} loggedin{ // checks if there is a saved token and its still valid const idtoken = thisgettokenreturn idtoken &&istokenexpired} 
settoken{ // saves user token to localstorage localstoragesetitemid_token} gettoken{ // retrieves the user token from localstorage return localstoragegetitem} logout{ // clear user token and profile data from localstorage localstorageremoveitem} gettokenexpirationdateencodedtoken{ const token = decodeiftokenexp{ return null} const date = new datedatesetutcsecondsreturn date} istokenexpired{ const expirationdate = thisgettokenexpirationdatereturn expirationdate <new date}}authentication will be handled by this classthe code contains comments that explain what is happening at each stepso i wont go over it herereplace your_client_id and your_client_domain in the above code with your auth0 client detailsinstall the following two packages$ npm install --save auth0-lock jwt-decodeauth0-lock provides the lock widget while jwt-decode is used in the code to decode a json web token before checking if its expiration date has passedmodify countdownformimport authservice from/utils/authserviceclass countdownform extends reactcomponent { constructorprops{ superstate = { loggedinfalse }} componentdidmount{ thisauth = new authservicesetstate{ loggedinauthloggedin// instance of lock thislock = thisgetlock{ thislogout} onsubmit{ epreventdefault{ var secondsstr = thisvaluesecondsstrlength >0 &match/^[0-9]*$/{ thisonsetcountdowntimeparseint} } else { alertyou need to log in first} } render{ const authbutton = thisdiv>button classname=button expandedonclick={this}>logout</button>login<returnform ref=onsubmit={thisonsubmit} classname=countdown-forminput type=textref=placeholder=enter time in seconds/>classname=button success expandedvalue=start countdown/form>{ authbutton } <}}export default countdownformin the abovewe add a loggedin state to the component that will keep track of the users authentication statuswe instantiate an authservice object and use this to make an instance of the lock widgetwe set a callback function that will be called after authentication with thiscband in this 
function we change the loggedin state to trueon log outthis will be set to falsein the render buttonwe check the loggedin state and add a login button if its value is false and a logout button otherwisethese buttons are bound to the loginand logoutfunctions respectivelywhen the form is submittedwe first check if the user is authenticated before proceeding with the countdownif they arenan alert is displayed that lets them know they need to be logged inrun webpack to process and bundle the javascript files and then start the app$ webpack$ npm startwhen you navigate to http3000/you will see the added login buttonon clicking the buttonthe lock widget will be displayeduse its sign up tab to create an accountafter signing upyou will be automatically logged intherefore you will be able to perform a countdown and the bottom button will now be the logout buttonthat works finebut if you run the teststhere will be several failing onesif you take a look at the error messagesyou will see referenceerrorlocalstorage is not defined several timeswe mentioned earlier that jest is a node-based runner that runs its tests in a node environmentsimulating the dom with jsdomjsdom does a great job in replicating a lot of dom featuresbut it lacks some browser featuresfor exampleat the time of writing thisthe current version of jsdom doesnt support localstorage or sessionstoragethis is a problem for us because our app saves the authentication token it gets back from auth0 to localstorageto get around this limitationwe can either create our own implementation of localstorage or use a third party one like node-localstoragesince we only require a simple version of localstoragell create our own implementationto be able to saveretrieve and remove a token to localstoragewe only require the setitemkeyand removeitemfunctions of the storage interfaceif your application requires other localstorage featuress better to use the third party optioncreate a file in the utils folder named 
localstoragemoduleexports = { setlocalstoragefunction{ globallocalstorage = { getitem{ return this[key]{ this[key] = value{ delete this[key]} }const jwt = requirejsonwebtokenconst token = jwtsign{ foobarmathfloornow/ 1000+ 3000 }shhhhhlocalstorage}}we create an object with the three required functions and assign it to globalwe then create a tokenset an expiration date to it and save it in localstorage as the value of the id_token keythe token will be decoded in authservice and its exp attribute checked to determine if it has expiredyou should note that jwt-decode doesnt validate tokensany well formed jwt will be decodedif your app uses tokens to authorize api callsyou should validate the tokens in your server-side logic by using something like express-jwtkoa-jwtowin bearer jwtetcyou can create a test account and perform a real login during testingbut i prefer to not make unnecessary network calls during testingsince we arent testing the login functionalityi deem it unnecessary to perform authentication with the auth0 servertherefore we create afaketoken with an exp attribute that will be checked by the app$ npm install --save-dev jsonwebtokenadd the following to the countdownformjsx and countdownjsx components inside their outer describeblocks before all the itand inner describeblocksbeforeall{ const ls = require//utils/localstoragelssetlocalstoragerun the tests with npm test and they should all passconclusionweve looked at how to use jest as a test runner when testing a react applicationfor more on jestbe sure to check its documentation", "image" : "https://cdn.auth0.com/blog/testing-react-with-jest/logo.png", "date" : "January 26, 2017" } , { "title" : "How To Build Your User Analytics Funnel With Social Login", "description" : "How to collect, send and analyze your user data for growth", "author_name" : "Diego Poza", "author_avatar" : "https://avatars3.githubusercontent.com/u/604869?v=3&s=200", "author_url" : "https://twitter.com/diegopoza", "tags" : "marketing", 
"url" : "/how-to-build-your-user-analytics-funnel-with-social-login/", "keyword" : "introcompanies that want to make data-driven decisions know they need to learn as much about their users as they canbut great companies know that the best way to get that data is not by asking — its by building a funnel to collect whats already out there quickly and efficientlythats why social login is one of the most powerful analytics toolswith the option to log into your app using a social media account theyve already set upusers save themselves the annoying step of creating a new username &password combinationand you save them the effort of telling you who they arewith a few simple tools and pieces of codeyou can use the information theyve made publicly available to make better decisions about your marketingproduct developmentand user retentionlets go step-by-step through what you need to start using your user data betterfrom signup and login to analysis1set up social loginthe first thing you have to decide is what kinds of social media platforms you want to support with your social loginauth0 can automatically reconcile the different headers and response formats of different social apisso as the developer you dont need to think about which youtechnicallycan and cant enablewhat you should think about are the kinds of social profiles that your users want to useboth in terms of popularity and data accessibilityboth facebook and google are sure betsthe two of them together represent more than 3/4 of all social logins on the webif youre working on a fundamentally social app — messagingcommunicationentertainment — it would be hard to avoid putting them on your sitebut if youre working on anything specialized — from developer toolsto marketing &salesto file-sharing — there are going to be otherbetter options for getting the best kind of user dataimagine pulling info on all of a devs repositories and commits from their github or bitbucket profileor automatically integrating all of a 
users dropbox uploads into your teamwork collaboration toolimagine pulling your new signups social graph so you can show them all their friends using your platformor immediately give them content that they can start interacting withthe social logins you support shouldnt just make it easier for people to login — they should integrate with platforms that will help you build a better user experience2set up rulesthe kind of information you collect is going to vary depending on how you want to use the data and the platforms youre collecting it frombut youll collect it in the same way regardlesss say your users are signing up with your twitter social login option and you want to collect what country is the user inall collection in auth0 is done through rules — snippets of javascript executed on the backend every time a user is authenticatedif you wanted to collect the current country from all users logging in through twitteryoud set up the add country to the user profile auth0s rule and then use this line of code to get the countryvar country = usercountryusing auth0s segment ruleyou could then send that data to segment and then to the email marketing tool of your choicefunctionusercontextcallback{ ifsignedup{ sendeventlogin} else { sendeventsignup} function sendevente{ var siotrack = { secretyour segmentio secretuseriduser_ideventproperties{ applicationclientnameipagentuseragent }{providersallfalse } } }request{ methodposturlhttps//apisegmentio/v1/trackheaderscontent-typeapplication/json}bodyjsonstringifysiotrackerrresponse{ ifreturn callbackife ==={ userpersistentsignedup = true} callbacknull}}you could use additional rules to collect each users estimated median incomebased on their ip addresss zip codelink accounts with the same email addressand more3analyze and use your dataauth0as a clearing house for all user authentication in your appcan operate as a single source of truth for user datawhen data is being drawn from discrete user identitieswith rules being executed 
on the backendtheres no risk of errant tags or false positivesthat makes auth0 particularly suited for analysis that demands a high level of precisionimagine you want to segment all your users by ageincomegenderregioninterestsand marital status in order to analyze who should receive a pre-launch email announcing a new coupon codewith social loginenriching your user profiles with that kind of data is painlessand there are endless applications for this kind of enrichmentpersonalized onboardingif everyone from product managers to marketers is using your saas productuse role attribution to send people to onboarding flows designed for their specific needsretention analysissegment your user base by activity and look at what kinds of users tend to stick around the longestwho takes the most advantage of your appand who you should be trying to re-engagebuilding customer personasgrouping your customers into representative personas is a powerful way to focus your marketing and product development effortsbut you dont need to do it all by intuition when you can use analytics to build quantitative models of who your users arewhere they come fromand what they doauth0 users can use our pre-built rules to send user information to a variety of applicationsslackslack is more than a communication toolwith the right integrationsit can become more like a hub for all your critical business activitiesyou can notify all users of a slack channel of your choice with our slack rule{ // short-circuit if the user signed up alreadyifstatsloginscount >// get your slacks hook url from//slackcom/services/10525858050var slack_hook =your slack hook urlvar slack = requireslack-notifyslack_hookvar message =new user+nameemail+ useremail +var channel =#some_channelslacksuccess{ textmessagechannelchannel }// dont wait for the slack api call to finishreturn right awaythe request will continue on the sandbox`callback}mixpanelmixpanel is an analytics provider that allows you to look at user behavior in both 
mobile and web applicationsyou can look at how specific features in your app are performingwhat sets apart the users who come back to your app day in and day out from those who dontthe rule below sends a sign in event to mixpanel every time a unique user logs into your appcheck out mixpanels http api for more information{ var mpevent = {sign indistinct_idtoken{replace_with_your_mixpanel_token}applicationclientname } }var base64event = new buffermpeventtostringbase64get{ urlhttpmixpanelcom/track/qs{ database64event } }rb{ // dont wait for the mixpanel api call to finish}fullcontactfullcontact is contact management software thats used to unifyde-dupe and clean lists of contacts — a big pain point for sales and marketing - heavy organizationsnot to mention media companiesour fullcontact rule allows you to get a users profile from fullcontact using their email addressitll add a fullcontactinfo property to their user_metadata if their information is availablefor moresee the fullcontact api docs{ var fullcontact_key =your fullcontact api keyvar slack_hook =// skip if no email if// skip if fullcontact metadata is already there ifuser_metadata &&user_metadatafullcontactcom/v2/person{ emailapikeyfullcontact_key } }error{ ifresponse &statuscode== 200{ slackalert{ channel#slack_channeltextfullcontact api errorfields{ errorstatuscode ++ body} }// swallow fullcontact api errors and just continue login return callback} // if we reach hereit means fullcontact returned info and well add it to the metadata useruser_metadata = user{}fullcontact = jsonparseauth0usersupdateusermetadata}there are endless ways to use customer data to build a better applicationthe key is to keep experimenting until you find something that really works — when you find thatdouble downthere are no magic bulletsall you can do is look for an edgeusing analyticshoweveryou can find that edge a lot faster", "image" : "https://cdn.auth0.com/blog/user-analytics-funnel-social-login/post-logo.png", "date" : "January 
25, 2017" } , { "title" : "Optimizing the Performance of Your React Application", "description" : "Optimizing your React application is simple thanks to a few easy-to-learn techniques.", "author_name" : "Alex Sears", "author_avatar" : "https://s.gravatar.com/avatar/6c0654e56c8c73ffee8f76fe03d18ccf?s=80", "author_url" : "http://twitter.com/searsaw", "tags" : "react", "url" : "/optimizing-react/", "keyword" : "TL;DR: Profiling your React code is simple using the tools provided by the react-addons-perf package. Once you know where React is wasting time, you can improve performance by using the correct keys, implementing shouldComponentUpdate in your components, and extending from PureComponent instead of the regular Component. React is fast. Like, hella fast. The core team spends lots of time and money making sure that React only makes the changes to the DOM that actually need to be made based on changes in state. However, as developers, we need to be aware that the code we write, and the way we write it, has a huge impact on the performance of our applications. We can't just expect the framework to be able to figure everything out. There are two types of wasted operations that can happen in React. The first is calculating pieces of the virtual DOM that won't change. The second is making changes to the actual DOM when those changes are not necessary. We are going to take a look at a small application that can definitely be optimized. There is a form we can use to change the desired color; then a bunch of boxes with Star Wars characters' names are printed to the screen underneath the form. We will start by getting some data printed to the console so we can tell if our optimizations are working; then we will implement different ways of optimization. Let's get to it. Note: I will be using Node version 6.9.2 for this post. Setup: First, we need to set things up. Clone the repo that holds the initial code for this application and install all the dependencies using npm. For those who are familiar with Yarn, you can use it to install the
dependencies instead of npm. If you'd rather just download the source, you can get that from the GitHub repo. git clone https://github.com/searsaw/optimizing-react.git; cd optimizing-react; npm install # or `yarn` if you have it; npm run serve # or `yarn run serve`. Open up a browser to localhost:8080 and get to know the application a bit. If you enter a color such as red or a hex color such as #ff0000 and hit the 'Change Color' button, it will change the background of every third square to that color. If you click on a square, it will be removed from the group. Profiling the application: Now that we have the application on our machines, we need a way to profile it to see where React is wasting time. Luckily, the team over at Facebook (heard of them?) has created a package called react-addons-perf. It's pretty simple to use. When we are ready for it to start profiling, we call Perf.start(). We do some actions to profile and then call Perf.stop(). Once we have some profiled data, there are a few tables we can output to see some of this data. The two we will concentrate on will be Perf.printWasted() and Perf.printOperations(). In this case, 'wasted' means, as stated before, that React calculated the new virtual DOM, compared it to the old one, and saw that there were some pieces it calculated that didn't change. This means it wasted time creating the new virtual DOM for pieces of the page that won't change at all. Operations are the changes React made to the actual DOM to make it mirror the virtual DOM. npm install --save-dev react-addons-perf # or `yarn add --dev react-addons-perf`. I have created a simple component that wraps react-addons-perf to make working with it much easier. Create a file at src/components/PerfProfiler/index.js and put the following in it: import React from 'react'; import Perf from 'react-addons-perf'; import styles from './styles.css'; class PerfProfiler extends React.Component { constructor(props) { super(props); this.state = { started: false }; } toggle = () => { const { started } = this.state; started ? Perf.stop() : Perf.start(); this.setState({ started: !started }); } printWasted = () => { const lastMeasurements =
Perf.getLastMeasurements(); Perf.printWasted(lastMeasurements); } printOperations = () => { const lastMeasurements = Perf.getLastMeasurements(); Perf.printOperations(lastMeasurements); } render() { const { started } = this.state; return <div className={styles.perfProfiler}><h1>Performance Profiler</h1><button onClick={this.toggle}>{started ? 'Stop' : 'Start'}</button><button onClick={this.printWasted}>Print Wasted</button><button onClick={this.printOperations}>Print Operations</button></div>; } } export default PerfProfiler; Next, create another file alongside this one at src/components/PerfProfiler/styles.css and put the following in it: .perf-profiler { display: flex; flex-direction: column; position: absolute; right: 50px; top: 20px; padding: 10px; background: #bada55; border: 2px solid black; text-align: center; } .perf-profiler > h1 { font-size: 1.5em; } .perf-profiler button { display: block; margin-top: 5px; } Lastly, we need to add the PerfProfiler to the application. In src/components/App.js, add the following: import PerfProfiler from './PerfProfiler'; // this import should be at the top with the rest ... <div id='container'> <PerfProfiler /> {/* add this line to put the profiler on the page */} <div className='form-container'> <Form onSubmit={this.onFormSubmit} /> ... If you restart the dev server and view our application in the browser again, you will see a box in the top right corner. We will use this to turn the profiler on and off and to output our data to the developer's console. If you press 'Change Color', stop the profiler, and then click 'Print Wasted', you will be able to see a table printed out in the developer's console that shows all the wasted calculations React had to perform. The importance of keys in lists: If you open the console now, you will see a warning from React telling us that we need to have a key prop on each item in our list of characters so React can identify each item individually. So let's do as it says and add a key prop. Open up src/components/CharacterList/index.js and add a key prop that is equal to the index of the character in the array we are iterating over: {characters.map((c, i) => <Character key={i} character={c} style={getStyles(color, i)} onClick={this.removeCharacter} />)} That should take care of the warning. Now let's see if we have any wasted operations when we remove a character by clicking on it. To profile this, click the toggle on the profiler, then click on a
character; you're better off clicking on one near the top. Then click the toggle again, and let's see if there were any wasted calculations by clicking on 'Print Wasted'. You will see that there are a couple wasted operations that occurred. The instance and render counts will be equal to the number of characters that are before the one you removed. We will tackle this waste a bit later. Now let's look at the operations that occurred with 'Print Operations'. Whoa! Eighty-four operations just to remove one element from the page? That doesn't seem right. If we look more closely at the data given to us, we see that most of the operations were just text replacements. We didn't tell it to update any text, did we? Well, yeah, we did. Unintentionally, but we did. React uses the key prop as the way of identifying unique items in a list. The way React works is it renders the virtual representation of the DOM, which is essentially just a giant JavaScript object. Whenever something changes, it re-renders it and compares the new one to the old one. Whatever has changed gets updated in the real DOM. If we remove an item in an array that is iterated over using the index of the array as the key, the next re-render of the DOM will have decreased the index by one of everything above the one we removed. [one, two, three, four, five] ^-- let's remove this one. Notice in the 'drawing' above: in the first array, one is at index 0, two is at index 1, three is at index 2, and so on. Then when we remove an item from the array, the indexes of everything after it have shifted down by one. In our application, when we remove a square, the index for each item after the one we removed is shifted down one, which means so is its key. When React compares the virtual DOM representation of each item by comparing the ones with the same key, it will see that the text in it has changed and will therefore 'replace text' in every one of the items after the item we removed. Then it will remove the last one, since it thinks it was the one removed. We need to change our code somehow so that React will only 'remove child' on the one we clicked on and will leave the others alone. We
can accomplish this by giving each item a unique key that won't change between renders. The best way to do this is to base the key on a piece of data that is displayed in the square. Since we have all of a character's data when we render a Character component, let's use a piece of data from each character. Let's use its name, since it won't change and we know it will be unique for each character in our array. Change the key prop like so: <Character key={c.name} character={c} style={getStyles(color, i)} onClick={this.removeCharacter} /> We have simply changed the key to instead be equal to c.name, which will be the name of the character we are currently iterating over. Profile the removing of a character and then look at the operations. You will see that we have successfully fixed all the 'replace text' operations, but they have been replaced by some new ones. Implementing shouldComponentUpdate: There are a bunch of 'update styles' operations that shouldn't be there. We want the default color to always be white. Since we are returning an empty string for most of them, it is switching the background-color for them from an empty string to the string 'white', or vice versa. We don't want this to happen; we want the default to always be white. Let's update getStyles in CharacterList to look like the following: const getStyles = (color, index) => { if (index % 3 === 0) { return { backgroundColor: color }; } return { backgroundColor: 'white' }; }; Now profile the removal of a list item using the PerfProfiler and output the operations. All of those 'update styles' operations are gone, and it took little work on our part. Now print the wasted operations by pressing 'Print Wasted' on the PerfProfiler. Oops, look at all those wasted virtual DOM calculations! We need to make sure we are telling React to only re-render a Character when the props passed to it have changed. React components, by default, are always re-rendered. One of the many lifecycle methods React gives us is called shouldComponentUpdate. This is a method we implement on a component that is passed the new props and the new state that is created when something changes in our application. If we return true, then it will re-render the
component; a value of false will prevent it. We will use this to compare the new props coming in to the props we currently have. With this said, let's implement shouldComponentUpdate on our Character component: shouldComponentUpdate(nextProps) { const { character, style, onClick } = this.props; return character.name !== nextProps.character.name || style.backgroundColor !== nextProps.style.backgroundColor; } We are saying that if the character names or style colors don't match, we want to re-render the component. Profile this in the browser; you will see that we have no more wasted operations. This solution doesn't extend well, though. If we add another prop, or decide that the onClick handler may change, then we need to make sure we add the checking to our shouldComponentUpdate method. React gives us a way to automatically do that: this logic exists inside React.PureComponent. We can extend our component from this component and get this functionality for free. Using React.PureComponent: There is a catch with using this method, though. React will do the comparison for us, but it only does a shallow comparison. For simple types like numbers and strings, this is not a big deal. This becomes an issue when we are passing down objects or arrays. Data changes inside objects or arrays won't be automatically picked up, because the prop that is compared will be the reference to the object or array; a shallow comparison does not look at the data inside it. Here's a simple example in vanilla JavaScript to illustrate the issue: const obj1 = { name: 'George' }; const obj2 = { name: 'George' }; const obj3 = obj1; obj1 === obj2 // false; obj2 === obj3 // false; obj1 === obj3 // true. In our Character component, we are passing down three objects. character will never change; we know that, so we don't need to worry about changing the reference it passes down. style is an object that has one attribute. Each time React calculates the style prop, it is getting a new object, which means it will be passing down a new reference to an object that could potentially look exactly the same (if the background color stays white). Therefore, we can move backgroundColor out of this object and pass
it down directly through props as a string. Strings are checked by value, so we are good there. Lastly, the onClick prop is calculated by creating a new closure that contains the current index of the item, so we know which one to remove. The issue is that each time we create a new closure, we are creating a new function and therefore a new reference. This is similar to the issue with the style prop. We can fix this by passing an onClick prop that doesn't change. This new onClick will be called by the Character component and will be passed the current character's name. We will have to change the logic in CharacterList.removeCharacter to work with this. Let's start with updating the Character component: class Character extends React.PureComponent { onClick = () => { const { character, onClick } = this.props; onClick(character.name); } render() { const { character, backgroundColor } = this.props; const style = { backgroundColor }; return <div className={styles.character} style={style} onClick={this.onClick}><p>{character.name}</p></div>; } } export default Character; We have changed the Character class to instead extend React.PureComponent. We added an onClick handler that is called by our component. Inside this handler, we call the onClick prop that was passed down from the CharacterList and pass it the character's name. We have updated our render method to work with the new form of our props. Notice the onClick prop of the wrapping div is now equal to the internal onClick, not the one passed down through props. Now on to the CharacterList: import Character from './Character'; class CharacterList extends React.Component { state = { characters: characters.slice(0) }; removeCharacter = (characterName) => { const { characters } = this.state; const characterIndex = characters.findIndex(c => c.name === characterName); characters.splice(characterIndex, 1); this.setState({ characters }); } render() { const { color } = this.props; const { characters } = this.state; return <div className={styles.characterList}>{characters.map((c, i) => <Character key={c.name} character={c} backgroundColor={i % 3 === 0 ? color : 'white'} onClick={this.removeCharacter} />)}</div>; } } export default CharacterList; We have removed the getStyles function, since it is no longer needed. Also, we updated removeCharacter to be a simple method that takes a name, removes the character with that name from the character list, and
then updates state with that. We update our render function to pass down the correct props this time around. Notice I have used a ternary operator to get the necessary value for the backgroundColor prop. This could be abstracted into its own function, but I felt it was small and easy enough to understand to not warrant that. Now profile the removal of a character. We have no more wasted operations! In my opinion, this code is also cleaner and easier to understand for someone new to a codebase. That may just be me, though. Using immutable objects: We have done a ton of optimizing already, but there are still improvements to be made. Try changing the color to 'red'. Then type in the input box again, but don't click anything yet. Next, hit 'Change Color'. We have changed the color of some of the squares to red, and then changed them again. If you look at the wasted operations that occurred, you will see two things: the CharacterList and the PerfProfiler. These both re-render because a piece of data stored in the highest parent changed; it caused all the children of it to re-render, since we never told React how to tell if each should be updated, so it defaulted to re-rendering them. Since we have implemented shallow checking of props and state in our Character components, they did not re-render, since their color didn't change. Let's start with the easy one, the PerfProfiler. It doesn't take any props, and it has only one thing in state it manages, and it is just a boolean. This means we can use PureComponent to give us shallow checking, and we won't have to change the functionality at all: class PerfProfiler extends React.PureComponent { constructor ... All we changed here is React.Component to React.PureComponent. Profile the application like we did above, and you will see that the profiler is no longer causing any wasted calculations. Let's deal with the last piece of wasted calculations. We can do a similar thing with the CharacterList: let's make it extend from React.PureComponent and see what happens. If we profile the application again, we will see the CharacterList is not re-calculated, which is exactly what we want. But we have broken our application! If we try
to remove a square by clicking on it, nothing happens. Why is this happening? Remember when I said that React does a shallow comparison of all props and pieces of state to determine if the component should update? In our removeCharacter method, we are using the splice method on the characters array to remove an item from it. splice changes an array in place. This means the array reference we are storing in state never changes, so React doesn't know it needs to update. To get this working correctly, we need to make sure this reference changes. We need to make sure we treat the characters array as an immutable structure. This means we need to use operations on it that will always return a new array. To do this, we need to update our removeCharacter method: removeCharacter = (characterName) => { const { characters } = this.state; this.setState({ characters: characters.filter(c => c.name !== characterName) }); } Here we are using the filter method. This method returns a new array reference that points to an array that is exactly like the previous one but has certain items filtered out. We set this new reference to be the one we keep track of in state. React will see this change in the reference when it does its shallow checking and will know that the CharacterList needs updating. This will cause React to see that we removed one from the list, and it will then be removed from the actual DOM. Go ahead and give it a try. Make sure everything is working. Profile some stuff to make sure we have removed all wasted calculations and unnecessary operations on the DOM. Wrapping up: So let's recap. We have learned how to profile our application using a component that wraps react-addons-perf. We figured out the best value to use for the key prop: a unique, consistent value. We learned how React figures out what changes need to be made to the actual DOM, and how to tell React when it needs to recalculate whether a component needs to be updated in the new virtual DOM representation. As you can see, optimizing the work React has to do does not have to be hard. Using the profiler, going back and optimizing the
application is made simple. Using immutable data structures makes things even easier. If you want an easy way to ensure all complex structures will be immutable, I recommend looking into the Immutable.js library. I hope this has made more clear what React does under the hood and how to make things faster. Let me know what you think in the comments.", "image" : "https://cdn.auth0.com/blog/optimizing-react/logo.png", "date" : "January 24, 2017" } , { "title" : "Building and Securing Koa and Angular 2 with JWT", "description" : "Single Page Applications (SPAs) can benefit greatly from JWT secured backends. Here we will see how to secure an Angular 2 app, backed by Koa, with JWTs.", "author_name" : "Bruno Krebs", "author_avatar" : "https://www.gravatar.com/avatar/76ea40cbf67675babe924eecf167b9b8?s=60", "author_url" : "https://twitter.com/brunoskrebs", "tags" : "angular2", "url" : "/building-and-securing-a-koa-and-angular2-app-with-jwt/", "keyword" : "TL;DR: Koa is a web framework for Node.js that is based on generators, a new ES6 feature, providing a simpler and more concise API. In this article, we will build a grocery list application, with an Angular 2 front-end, that communicates with a Koa-based backend. Our application will take advantage of JWT tokens to secure these communications. The full implementation is provided on this repo at GitHub. Overview: For our application, we will use TypeScript, a programming language that extends JavaScript with type checking, for developing both our backend and frontend. Angular 2 already advises us to use TypeScript when writing applications with their framework, but, besides the advantage of using the same language on both ends, TypeScript enables developers to become more productive by using tools that help them to avoid mistakes, like passing the wrong type to a method, and also by making refactoring much easier. Since we want our application to secure our users' data, we will use JWT tokens to authorize certain requests. A JWT, which stands for JSON Web Token and is pronounced as 'jot', is
a token that provides credibility in an end-to-end communication. JWTs are getting widely adopted, and they take their place as an alternative to the, rather old, cookies approach. The biggest advantage of JWTs is that they can hold data in a readable format and still be trustworthy while getting sent over the network: the payload can be read by anyone, but it cannot be tampered with without invalidating the signature. Koa is a web framework, just like Express, that is developed by many of the same people that built Express (by the way, here is a nice tutorial on how to secure an Angular 2 app backed by Express). Unofficially known as Express's successor, Koa uses generators to improve readability and robustness of applications. Writing middlewares to handle users' requests becomes very easy and clear with Koa's approach, as we will see in our own grocery list. Our application, the grocery list: The grocery list application will have a very simple and intuitive functionality. Visitors (unknown users) will be able to register themselves or, if they have already registered before, to sign in and manage their current list of items to buy at the grocery store. A user won't be able to have more than one list. The application will look like this. The most important files of our source code will be divided in three folders: the client source folder, which will hold our Angular 2 source code; the common source folder, which will hold files that are used by both backend and front-end; and the server source folder, which will contain all the code that is responsible for persisting users' data and authenticating them. Cloning the repo: To reach a minimum viable architecture, where we can start developing the real code for our grocery list, we'll need to do some configuration. Angular 2 alone is already considered cumbersome to configure, so, to avoid wasting valuable time, we will use a repo that provides a very good starting point, containing many of the dependencies installed and configured, leaving us to deal with what matters: Koa's middlewares, Angular's components, and JWT tokens. Configuration: This repo was built specifically to be followed alongside this post, and can be found on GitHub. So let's clone
it: git clone git@github.com:brunokrebs/grocery-list.git. Node.js and npm: Now that we have the repository cloned, we need to start configuring our development environment. The most important piece of software that will enable us to use Koa and Angular 2 is Node.js and its package manager, npm. In order to continue, we shall first be sure that we have both of them installed with the right versions: node --version. The above command must output at least v4.0, since it is the first version that supports generators. If an error occurs while issuing this command, or a version prior to that gets printed, please refer to the download area of Node.js and install the latest version. Having Node.js and npm correctly installed, we must issue npm install in our project's root folder. This command will install, locally, all the runtime and development dependencies that our application has. This command may take a while to run, since there are many dependencies. When the installation finishes, we must issue npm run dev to verify that our project was indeed correctly cloned and that the dependencies were installed. This command will bundle everything and start the Koa server locally on port 3000. Let's head to http://localhost:3000/ and check that our backend is serving the index.html file. If it is, you will see a very simple page with the title 'Grocery List' in the navigation bar at the top. This bar will also contain two labels called 'Sign In' and 'Sign Up', but they won't do anything for the time being. The backend: Everything is now in order, and we can begin talking about our backend. This will be a thin layer, as it will have only three responsibilities: first, it will have to be able to register and retrieve users; second, it will have to authorize (or deny) user requests; and third, it will have to be able to manage updates to users' grocery lists. To make things easier, we will map just two classes to represent our entities: a User class, to hold users' data, and an Exception class, to represent errors like unauthorized requests. We won't create a class to represent the grocery list itself, because this can be
easily managed as a property (an array of strings) in our User class. To provide persistence and guarantee that our users' data is available, all our data will be held by an in-memory database called LokiJS. This database has the advantage of being really simple to set up and integrate with Node.js. But first, let's start by configuring Koa and its middlewares. Koa: Koa middlewares, just like Express middlewares, are functions that have access to three things: the request object, that represents the request sent by the user; the response object, that represents what will be sent to the user; and the next middleware in the stack. Middlewares can execute any code to change the request and/or the response object, and can decide if the next middleware will get executed or not. Our application will have three middlewares that will help us provide the functionality that we need. The first middleware is koa-bodyparser. This middleware is responsible for parsing the request sent by the user, and it supports three types of content: JSON objects, forms (inputs, select boxes, and so on), and text. Whenever we have this middleware configured, we can access the data sent by the user. This is a must-have middleware, and as such our boilerplate repository already comes with it installed and configured. The second middleware is called koa-static. As the name indicates, this middleware enables us to serve static files, like an image or an HTML file, to our users. Considering that we want our users to be fed with the index.html whenever they visit our website, we need this middleware. This is another must-have middleware, so the base project already has this one as well. The third middleware will be responsible for sending errors, formatted as JSON objects, to our users. Although we could use an existing middleware, like koa-error, we will build this one from scratch to show how easy it is. The exception handler middleware: Let's begin by creating an Exception class to represent expected errors during runtime. Since this class will be useful on our front-end as well, we shall create it, naming it as
exception.ts, in a new folder called common. The result must be a file at ./src/common/exception.ts with the following content: export class Exception extends Error { private _statusCode: number; constructor(statusCode: number, message: string) { super(message); this._statusCode = statusCode; } get statusCode(): number { return this._statusCode; } toObject(): Object { return { statusCode: this._statusCode, message: this.message }; } } As we can see, what we've just created is responsible for carrying two properties related to errors: a status code, that represents an HTTP status, and a message with the error description. In order to be able to warn our users properly, i.e. in a JSON object format, about these errors, we now have to create an exception-handler.middleware.ts file that will contain our middleware's source code. Create it as a sibling to app.ts, under ./src/server/. The contents of this middleware are very simple: import {Exception} from '../common/exception'; export default function* (next) { try { yield next; } catch (err) { if (err instanceof Exception) { // it transforms the exception to an object literal this.body = err.toObject(); this.status = err.statusCode; } else { // unknown error console.log(err); this.body = { message: 'unexpected error' }; this.status = 500; } } } Reading this file from top to bottom, we can see that it first declares a dependency on the previously created exception.ts file and, after that, exports a generator (a function marked with *). What this generator does is yield the control to the next middlewares in the stack, to let them process the requests, while making sure that, if any error occurs in them, it gets caught and then reported to the user as an object literal. Defining the Exception class and the exception-handler middleware is not enough; we also have to change ./src/server/app.ts to make our Koa server use these new resources: // ...other imports... // import the newly defined exception handler middleware import exceptionHandler from './exception-handler.middleware'; // ...previous configs and middlewares... // make the Koa server use the middleware server.use(exceptionHandler); server.use(router.routes()); server.listen(3000); It is important to note that the exception-handler middleware must be configured
before the router. Defining it like that enables our handler to call the routes that we will define later, and handle any exceptions and errors that might occur in them. User class and LokiJS database: We now have a backend application that is capable of serving static files, parsing user requests, and handling exceptions. Let's start creating the representation of our users, and then create a class that will help us persist these users. Our User class implementation is straightforward. Create a file called user.ts in the same folder as exception.ts, which is ./src/common/, and then add the following code: export class User { public email: string; public password: string; public name: string; public token: string; public items: Array<string>; public static OnSerialized(instance: User, json: any): void { delete json.password; delete json.meta; } } Besides the five properties defined in this class, there is one fancy element, the static OnSerialized method, that is worth a mention. To avoid sending over the wire the user's password, and a property called meta that only LokiJS cares about, we will use a serialize function that is part of a package entitled cerialize. Whenever we use this serialize function, this OnSerialized gets called and removes the mentioned properties, sending only the data that we want to send. Now, the user database helper, which we will create as a new file called user.dao.ts in ./src/server/user/, will contain three methods that will allow us to persist and retrieve users, and a method to bootstrap the database collection: import { User } from '../../common/user'; class UserDAO { private user_db: LokiCollection<{}>; configure(db: Loki) { let instance = this; db.loadDatabase({}, function () { instance.user_db = db.getCollection('users'); if (!instance.user_db) { instance.user_db = db.addCollection('users'); } }); } insertUser(user: User) { this.user_db.insert(user); } findByEmail(email: string): any { return this.user_db.findOne({ email: email }); } update(user: User): void { let persistedUser = this.findByEmail(user.email); persistedUser.items = user.items; this.user_db.update(persistedUser); } } export const singleton: UserDAO = new UserDAO(); The first method defined will be later called with an instance of a LokiJS database. This instance will then be used to get the user collection or, if
it was not created yet, create this collection. The other three methods are pretty self-explanatory: we call insertUser to insert new users, findByEmail to find a user by the e-mail address, and update to make the user's new data persistent. The last thing that we have to do, to have a user collection to hold some data, is to set up LokiJS within our backend and then configure the UserDAO class to use it. In ./src/server/app.ts, let's make the following changes: // ...previous imports... // import LokiJS and UserDAO import * as Loki from 'lokijs'; import {singleton as userDAO} from './user/user.dao'; const db = new Loki('grocery-list.db', { autosave: true }); userDAO.configure(db); // ...everything else... This change is small and simple. There are only three steps that we need to follow: import LokiJS and UserDAO; configure a new instance of LokiJS, defining a system file that will hold our data while our server is not up; and then call the configure method on our singleton UserDAO, passing to it the LokiJS database instance. So far we have only called the method that configures UserDAO; the other ones, that enable us to retrieve and update users, are yet to be used. Next we will create a route to allow a user's list to be updated. Handling user update: To be able to update our users' grocery lists, we now have to create our first Koa route. The project that we've cloned already contains one route, defined in app.ts, that serves the index.html file whenever an unknown URL is called. We do this to allow the Angular 2 front-end application, that we are going to build, to handle routes by itself. Creating a Koa route to handle updates on users is a straightforward process. Let's create a file in ./src/server/user/ with the following code: import {singleton as userDAO} from './user.dao'; export default { path: '/api/update-list', middleware: function* () { let user = userDAO.findByEmail(this.state.user.email); user.items = this.request.body.items; userDAO.update(user); this.body = {}; } }; Everything that we do in this file is export an object literal with two properties. The first property, called path, represents the URL that will be requested by external resources, like our front-end, to update a user's grocery list. The second property, called middleware, contains the generator that will handle
such requests. This generator acts as a last resource in a request, since it does not call yield next like the last one that we've built, simply retrieving a user object, based on the user's e-mail, and then updating this user with the new items sent within this.request.body. Notice that we use two different objects to get data from the request. The second object, this.request.body, contains the new information sent by the user, and it is under koa-bodyparser's responsibility to parse it properly. The first object, this.state.user, does not exist yet; we still have to find a way to be able to identify whose request we are handling. But first we have to wire our new route to our application: // ...Koa router and fs imports... import update_list_route from './user/user.route'; // ...other routes... router.post(update_list_route.path, update_list_route.middleware); export default router; The code snippet above has to be inserted in the router file. First we import our route and, after other routes already defined in this file, we register it on the main router object, configuring it to handle POST requests for the defined path. Authentication and authorization: We now have the endpoint that will handle updates on users' lists, integrated into the backend and ready to work. But we are still missing a way to authorize a user to issue such an update and, more than that, we still don't have an endpoint to authenticate existing users and register new ones. These are the last features that we have to implement on our backend, so let's dig into them. The first thing that we will do is create a file called authentication.ts under the ./src/server/ folder with the following content: // ...routes... import {singleton as userDAO} from './user/user.dao'; import { sign, verify } from 'jsonwebtoken'; import {serialize} from 'cerialize'; const SUPER_SECRET = 'change-this'; export const sign_up = { path: '/api/sign-up', middleware: function* () { if (userDAO.findByEmail(this.request.body.email)) { throw new Exception(401, 'e-mail already registered'); } userDAO.insertUser(this.request.body); let user = userDAO.findByEmail(this.request.body.email); this.body = { token: sign(serialize(user), SUPER_SECRET), user: serialize(user) }; } }; export const sign_in = { path: '/api/sign-in', middleware: function* () { let user = userDAO.findByEmail(this.request.body.email); if (user && this.request.body.password === user.password) { this.body = { token: sign(serialize(user), SUPER_SECRET), user: serialize(user) }; } else { throw new Exception(401, 'unknown user'); } } }; export const secured_routes = { path: /^\/api\/.*$/, middleware: function* (next) { try { let token =
thisheaders[authorization]user = verifytokenreplacebeareryield nextthis new file exports three middlewares that will act like routess dive into the first two of themthe first one will respond to /api/sign-up post requests and it will enable new users to register to our applicationthe second one is going to be tied up to /api/sign-in in order to allowan user to use the applicationthis is done based on an e-mail and a password informed by the userboth middlewares described above respond to the user in the same wayif they are fed with proper data they send back a json response containing a token - issued by a function called sign of jsonwebtoken package - and the user datawhich contains its e-mail addressits name and itsin case the user sends improper datalike a wrong combination of e-mail and passwordor try to register with an e-mail that is already registeredboth middlewares answer with an exception describing the problemthe third middleware acts as a key component on our backendas we can seethe path that it will answer to is a regular expressionthis regular expression makes this middleware activate on any request sent to paths that begins with /api/and what it does is to check if a valid token is informed on the authorization header requestthis verification occurs with the help of verify function of the jsonwebtoken packagesign and verify functions work together to secure our userswhenever a request is sent to any of the protected endpointsverify takes control and ties a user object literalthat is retrieved from the tokento the thisuser referencethis token must be present and signed with the same secret ingredientwhich is done by the verify function when the user authenticated or registeredthat is how we guarantee that our user is who he claims to bea token data can be read by anyone anywherebut its content cannot be changed because if it isthe verify function will complain that it cannot assert this contentnow its time to register these new routes on our backend 
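To make the sign/verify relationship concrete, here is a minimal, dependency-free sketch of what an HMAC-signed token looks like, using only Node's built-in crypto module. The helper names are ours and this is an illustration of the mechanics only; the application itself relies on the jsonwebtoken package.

```typescript
import * as crypto from "crypto";

// Minimal illustration of the sign/verify mechanics described above.
// Helper names are hypothetical; the real app uses jsonwebtoken.

const toB64Url = (buf: Buffer): string =>
  buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

const fromB64Url = (s: string): Buffer =>
  Buffer.from(s.replace(/-/g, "+").replace(/_/g, "/"), "base64");

function hmac(data: string, secret: string): string {
  return toB64Url(crypto.createHmac("sha256", secret).update(data).digest());
}

function signToken(payload: object, secret: string): string {
  const header = toB64Url(Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })));
  const body = toB64Url(Buffer.from(JSON.stringify(payload)));
  return `${header}.${body}.${hmac(`${header}.${body}`, secret)}`;
}

function verifyToken(token: string, secret: string): object {
  const [header, body, signature] = token.split(".");
  // Anyone can decode the payload, but only the holder of the secret
  // can produce a matching signature, so tampering is detected here.
  if (signature !== hmac(`${header}.${body}`, secret)) {
    throw new Error("invalid signature");
  }
  return JSON.parse(fromB64Url(body).toString());
}
```

Note that decoding the payload requires no secret at all, which is exactly why sensitive data should never be placed inside a token; only the signature check depends on the secret.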
Now it is time to register these new routes on our backend application. To do so, open the app.ts file and add the following code:

```typescript
// previous imports...
import { sign_in, sign_up, secured_routes } from './authentication';

// load_html function declaration and middleware that uses it...

router.post(sign_in.path, sign_in.middleware);
router.post(sign_up.path, sign_up.middleware);
router.use(secured_routes.path, secured_routes.middleware);
```

It is important to notice that order matters when defining middlewares and routes on Koa, since each middleware or route has the option to yield control to the next one on the stack or to stop right there and answer the user. So we must define sign_in and sign_up before the secured_routes middleware, and the three of them must also be defined before update_list_route. With that we finish writing our backend, which now has every feature that we want, secured, and we can focus on the front-end development with Angular 2.

Angular 2 front-end: the final boundary

The repository that we've cloned already comes with a basic Angular 2 application that is bootable, but it doesn't really do anything useful. Therefore we must create the components that will allow our users to register, authenticate, and manage their lists. There are three components that we will have to create (SignInComponent, SignUpComponent, and GroceryListComponent), alongside a routing module to handle the front-end's states, an AuthenticatedGuard that will prevent unknown users from accessing protected areas, an AuthenticationService that will be used by the first two components, and a GlobalErrorHandler that will catch any error and warn the user. Even in an application as simple as our grocery list, there is a good number of moving parts, so let's get to them.

Handling errors globally

The first moving part that we are going to attack is the GlobalErrorHandler. This handler will enable us to warn the user when an expected error occurs, like badly informed credentials. Creating this handler is quite simple. First we create a file called global-error-handler.ts in /src/client/app/ with the following content:

```typescript
import { ErrorHandler } from '@angular/core';
import { Exception } from '../../common/exception';

export class GlobalErrorHandler implements ErrorHandler {
  handleError(error: any): void {
    let myErrorObj: Exception = error.rejection;
    alert(myErrorObj.statusCode + ': ' + myErrorObj.message);
  }
}
```

Notice that our GlobalErrorHandler uses the common Exception class that we've created before. This is used to tightly type the error coming over the wire, so that we can refer to its status code and error message. Now we have to register it on the main application module, which is held by the /src/client/app/app.module.ts file:

```typescript
// previous imports...
import { GlobalErrorHandler } from './global-error-handler';

@NgModule({
  // other statements...
  providers: [
    { provide: ErrorHandler, useClass: GlobalErrorHandler }
  ]
})
export class AppModule { }
```

Registering it to become our ErrorHandler is easy. As we can see in the snippet above, it is just a matter of telling Angular that whenever it needs an ErrorHandler, it must actually use the GlobalErrorHandler class.

Authentication service on the front-end

After configuring a way to handle errors on our front-end application, it is time to create our first and only Angular 2 service. This service will be responsible for both authentication and sign up, and it is going to be called AuthenticationService. Let's create a file called authentication.service.ts in the /src/client/app/ folder to handle these features for us:

```typescript
import { Injectable } from '@angular/core';
import { Http } from '@angular/http';
import 'rxjs/add/operator/toPromise';
import { User } from '../../common/user';
import { Router } from '@angular/router';

@Injectable()
export class AuthenticationService {
  private _user: User;

  constructor(private http: Http, private router: Router) { }

  private onAuthenticatedCall(response: any): void {
    this._user = response.json().user;
    localStorage.setItem('id_token', response.json().token);
    this.router.navigate(['/grocery-list']);
  }

  authenticate(email: string, password: string): Promise<void> {
    return this.http.post('/api/sign-in', { email, password })
      .toPromise()
      .then(response => this.onAuthenticatedCall(response));
  }

  signup(user: User): Promise<void> {
    return this.http.post('/api/sign-up', user)
      .toPromise()
      .then(response => this.onAuthenticatedCall(response));
  }

  user(): User {
    return this._user;
  }
}
```

This service has only three public methods: one to authenticate a user based on an e-mail and a password that are passed as parameters; another one to allow a new user to sign up to our application; and a third method that returns the authenticated user, or null. Both the authenticate and signup methods have very similar behavior.
The authenticate method makes a call, through the Angular 2 Http component, to the /api/sign-in endpoint, passing an object literal containing the user's e-mail and password. If the backend sends back a successful response, the authenticate method hands this response to the private method called onAuthenticatedCall. This private method then takes three steps:

1. It takes the user's data sent back and keeps it in the service's memory, under the _user property.
2. It gets the JWT token sent by the server and registers it in localStorage under the id_token key.
3. It sends the user to the /grocery-list Angular 2 route, yet to be defined.

The signup method proceeds in almost exactly the same way. The difference is that it sends the user's name along with their e-mail and password, and that the endpoint changes to /api/sign-up. If the request gets a successful response, this method proceeds exactly like the authenticate method, passing the response to the onAuthenticatedCall private method. Also, it is important to notice that if any error is sent back from the server, the GlobalErrorHandler previously defined handles it by showing it to the user through an alert.

Lastly, let's not forget to register this service within our AppModule, which is located at /src/client/app/app.module.ts. This is a simple task: just add it as a provider below our GlobalErrorHandler:

```typescript
// other imports...
import { AuthenticationService } from './authentication.service';

@NgModule({
  // previous definitions...
  providers: [
    { provide: ErrorHandler, useClass: GlobalErrorHandler },
    AuthenticationService
  ]
})
export class AppModule { }
```

The difference between both declarations occurs because GlobalErrorHandler works as a substitute for the default ErrorHandler, while AuthenticationService does not substitute anything; it will simply get injected into the components that we will create.

Sign up component definition

Now we will start working on the three components that compose our application. The first one that we will investigate is the SignUpComponent. Its definition is very easy, as the heavy work is done by the AuthenticationService. Create a new file called sign-up.component.ts in a new folder called sign-up under /src/client/app/ and paste the following code:
```typescript
import { Component } from '@angular/core';
import { AuthenticationService } from '../authentication.service';
import { User } from '../../../common/user';

@Component({
  selector: 'sign-up',
  templateUrl: './sign-up.component.html'
})
export class SignUpComponent {
  user: User = new User();

  constructor(private authenticationService: AuthenticationService) { }

  signup(): void {
    this.authenticationService.signup(this.user);
  }
}
```

We will also need to create the sign-up.component.html file that is referenced by this component. Create it right next to the component definition and paste the following HTML code:

```html
<div class="row">
  <div class="col-xs-12 col-sm-8 col-sm-offset-2 col-md-6 col-md-offset-3">
    <div class="jumbotron">
      <h2>Fill your data</h2>
      <form (ngSubmit)="signup()">
        <div class="form-group">
          <label for="name">Full name</label>
          <input type="text" [(ngModel)]="user.name" id="name" name="name" class="form-control" />
        </div>
        <div class="form-group">
          <label for="email">E-mail address</label>
          <input type="text" [(ngModel)]="user.email" id="email" name="email" class="form-control" />
        </div>
        <div class="form-group">
          <label for="password">Password</label>
          <input type="password" [(ngModel)]="user.password" id="password" name="password" class="form-control" />
        </div>
        <button type="submit" class="btn btn-primary">Sign up</button>
      </form>
    </div>
  </div>
</div>
```

This component, and its template, work in a simple way. The component just defines a dependency on AuthenticationService, a property of type User, and a signup method. Whenever our users reach this component, they face a form with three text inputs: one for the user's name, another for the user's e-mail address, and the last one for the desired password. After filling in these fields, the user submits the form by clicking the "Sign up" button, which triggers the signup method that we have just defined. This signup method has only one responsibility: to pass the user's data to the AuthenticationService.signup method.

Sign in component definition

The next component that we are going to define is the SignInComponent, and it is extremely similar to the SignUpComponent. This component will also have its own folder, so let's create sign-in under /src/client/app/ and add a file called sign-in.component.ts to it with the code below:

```typescript
import { Component } from '@angular/core';
import { AuthenticationService } from '../authentication.service';

@Component({
  selector: 'sign-in',
  templateUrl: './sign-in.component.html'
})
export class SignInComponent {
  email: string;
  password: string;

  constructor(private authenticationService: AuthenticationService) { }

  signin(): void {
    this.authenticationService.authenticate(this.email, this.password);
  }
}
```

Similarly to the SignUpComponent, we also have to define a template for this component, so we must add a file called sign-in.component.html beside the previous file, with an analogous template: a jumbotron titled "Enter your credentials", e-mail and password inputs, and a "Sign in" button that triggers the signin method. As we can see, the two components are nearly identical.
The only difference is that the sign-in component does not ask for the user's name, and it calls the AuthenticationService.authenticate method instead of AuthenticationService.signup. These components could easily be merged into a single SignInSignUpComponent, but let's follow the single-responsibility principle and keep things separate.

AuthHttp configuration

Before moving ahead to the GroceryListComponent, we must first configure a great component called AuthHttp. This component belongs to the angular2-jwt package, and it makes communicating with a secured backend very easy. Since we have already followed the convention stated in its documentation (i.e. we have saved our users' JWT token under the id_token key in localStorage), it is just a matter of configuring our AppModule to use it as a provider, and then defining this class as a dependency of whatever component needs it, as we are going to do with the GroceryListComponent. Let's open the app.module.ts file and add this class as a provider:

```typescript
// previous imports...
import { AUTH_PROVIDERS } from 'angular2-jwt';

@NgModule({
  // previous configuration...
  providers: [
    // previous providers...
    AUTH_PROVIDERS
  ]
})
export class AppModule { }
```

GroceryListComponent definition

Now we finally get to the real deal. The GroceryListComponent is what the users are really interested in. This component will be responsible for three things: letting the users see their current grocery list; letting the users add new items, through a form, to their lists; and letting the users remove an item by tapping or clicking it.

Again, two files will be needed. The first one is the component definition with its methods, and the second one is the component's HTML template. Let's begin by creating a new folder called grocery-list under /src/client/app/. Then we must create a file called grocery-list.component.ts in this new folder, with the following content:

```typescript
import { Component } from '@angular/core';
import { AuthHttp } from 'angular2-jwt';
import { AuthenticationService } from '../authentication.service';
import { User } from '../../../common/user';

@Component({
  selector: 'grocery-list',
  templateUrl: './grocery-list.component.html',
  styles: [`
    .jumbotron p { font-size: 1em; }
    .jumbotron form { margin-bottom: 0; }
  `]
})
export class GroceryListComponent {
  private updateList = '/api/update-list';
  newItem: string;

  constructor(private authHttp: AuthHttp,
              private authenticationService: AuthenticationService) { }

  private updateUsersList(): void {
    this.authHttp.post(this.updateList, this.getUser())
      .subscribe(
        data => this.newItem = null,
        err => console.log(err)
      );
  }

  getItems(): string[] {
    return this.getUser().items;
  }

  addItem(): void {
    if (this.newItem && this.newItem.trim() !== '') {
      if (!this.getUser().items) {
        this.getUser().items = [];
      }
      this.getUser().items.push(this.newItem);
      this.updateUsersList();
    }
  }

  removeItem(index: number): void {
    this.getUser().items.splice(index, 1);
    this.updateUsersList();
  }

  // returns the user kept by AuthenticationService
  private getUser(): User {
    return this.authenticationService.user();
  }
}
```

This component definition is far bigger than the previous ones, but it is also easy to grasp. What it does is provide three public methods that will be called by the template. The first method, called getItems, simply returns the list of items that belongs to the currently logged-in user; this list is then shown by the template, so the user can see what they have to buy. The second method, called addItem, is responsible for checking whether the user has informed a new item, through the this.newItem property, and then for calling the updateUsersList private method. This private method is also called by the next public method, and what it does is issue a POST request, through the AuthHttp object, to update the user's list. The third and final method, removeItem, is responsible for enabling the user to remove an item. This method is linked to every single item of the list in the template, and whenever one is tapped or clicked, it is called with the index of the selected item; it then calls the updateUsersList private method, updating the user's list on the backend.

```html
<div class="jumbotron">
  <p>This is your grocery list. Use the form below to add an item, or click on one in the list to remove it.</p>
  <form (ngSubmit)="addItem()">
    <div class="input-group">
      <input [(ngModel)]="newItem" name="newItem" class="form-control" placeholder="New item" />
      <span class="input-group-btn">
        <button class="btn btn-default" type="submit">Add</button>
      </span>
    </div>
  </form>
  <div class="list-group">
    <a class="list-group-item" (click)="removeItem(i)"
       *ngFor="let item of getItems(); let i = index">
      {{ item }}
    </a>
  </div>
</div>
```

The HTML code above is the template that the GroceryListComponent expects to find at /src/client/app/grocery-list/grocery-list.component.html. Let's create this file and paste this snippet in. As we can see, the template contains a form that enables the user to inform a new item, and a list of items, which is populated by the *ngFor Angular 2 directive. Each item in this list also has a click event defined to call the removeItem method on the component.

Defining the routes and their guard

We are getting really close to having a fully functional grocery list, but there are a few more steps that we have to take.
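Before moving on, note that the add/remove rules the component enforces can be isolated from the Angular and HTTP plumbing. Here is a minimal sketch of just that list logic; the class and method names are ours, not part of the project:

```typescript
// Plain-TypeScript sketch of the grocery-list rules described above:
// add an item only if it is non-blank, remove an item by index.
// (Hypothetical class; the real component also syncs with the backend.)

class GroceryList {
  private items: string[] = [];

  getItems(): string[] {
    return this.items;
  }

  addItem(newItem: string | null): void {
    // blank or missing input is silently ignored, as in the component
    if (newItem && newItem.trim() !== '') {
      this.items.push(newItem.trim());
    }
  }

  removeItem(index: number): void {
    this.items.splice(index, 1);
  }
}
```

Keeping this logic free of framework types also makes it trivially unit-testable.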
First, we will need to define a guard, called AuthenticatedGuard, that will prevent users from reaching the GroceryListComponent if they have not authenticated. For that, let's create a file called authenticated.guard.ts in /src/client/app/ with the code snippet below:

```typescript
import { Injectable } from '@angular/core';
import { CanActivate, ActivatedRouteSnapshot, RouterStateSnapshot, Router } from '@angular/router';
import { Observable } from 'rxjs';
import { AuthenticationService } from './authentication.service';

@Injectable()
export class AuthenticatedGuard implements CanActivate {
  constructor(private authenticationService: AuthenticationService,
              private router: Router) { }

  canActivate(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<boolean> | boolean {
    if (this.authenticationService.user() != null) {
      return true;
    }
    this.router.navigate(['']);
    return false;
  }
}
```

This guard has a dependency on our AuthenticationService and uses it whenever someone tries to activate a route wired to it. The canActivate method is what blocks or allows a user to move ahead: if the user has not authenticated yet, he or she is redirected to the root route, which shows the sign-in form.

Having defined the guard, we are now able to define our routes. This will be done in a new file called app.routing.ts. Create this file under /src/client/app and add the following code:

```typescript
import { RouterModule } from '@angular/router';
import { SignInComponent } from './sign-in/sign-in.component';
import { SignUpComponent } from './sign-up/sign-up.component';
import { GroceryListComponent } from './grocery-list/grocery-list.component';
import { AuthenticatedGuard } from './authenticated.guard';

const APP_ROUTES = [
  { path: '', component: SignInComponent },
  { path: 'sign-up', component: SignUpComponent },
  { path: 'grocery-list', component: GroceryListComponent, canActivate: [AuthenticatedGuard] }
];

export const routing = RouterModule.forRoot(APP_ROUTES);
```

This code creates a constant array called APP_ROUTES that contains all three routes that our application has:

- The root route, identified by an empty string, is tied to the SignInComponent and, as such, shows the sign-in form whenever it is reached. This will be available at the root path of our domain (e.g. http://localhost:3000/ or http://www.mydomain.com/).
- The sign-up route, shown when a user navigates to the /sign-up path, is wired to the SignUpComponent and enables users to register to our application.
- The grocery-list route presents the list of items to be managed by the user. This route, whose path is /grocery-list, is the only one guarded by the AuthenticatedGuard.
Whenever a user tries to activate this last route, the guard verifies whether the user is authenticated, and then it decides whether they can navigate to it or not.

Having our routes defined, we must now update the main template, which is located at src/client/app/app.html, adding these routes to the hyperlinks in it. Open this file and update it as follows:

```html
<nav class="navbar navbar-default">
  <div class="container">
    <!-- brand and toggle get grouped for better mobile display -->
    <div class="navbar-header">
      <a class="navbar-brand" [routerLink]="['']"></a>
    </div>
    <ul class="nav navbar-nav pull-right">
      <li><a [routerLink]="['']">Sign in</a></li>
      <li><a [routerLink]="['sign-up']">Sign up</a></li>
    </ul>
  </div>
</nav>
<section>
  <router-outlet></router-outlet>
</section>
```

Notice that we must add a router-outlet tag inside the section tag; this is where our components will get rendered.

Wiring everything up in the AppModule

We now have every single piece of the front-end defined. Before being able to execute our application completely for the first time, we have one last step: Angular 2 forces us to declare, in the module definition, every single component used in a module, in our case in the AppModule. Fortunately this is a trivial task. Open the module's file and update it as follows:

```typescript
// previous imports...
import { AuthenticatedGuard } from './authenticated.guard';
import { routing } from './app.routing';

@NgModule({
  bootstrap: [ AppComponent ],
  declarations: [ AppComponent, SignInComponent, SignUpComponent, GroceryListComponent ],
  imports: [ BrowserModule, HttpModule, FormsModule, routing ],
  providers: [
    { provide: ErrorHandler, useClass: GlobalErrorHandler },
    AuthenticationService,
    AuthenticatedGuard,
    AUTH_PROVIDERS
  ]
})
export class AppModule { }
```

Note that we have added five imports (three components, the routing declaration, and the AuthenticatedGuard), and that the imported elements must be set in the correct places. That is, the three components must be added to the declarations array, the routing must be added to the imports array, and the AuthenticatedGuard must be added to the providers array.

And then, finally, we have reached the point where we have a fully functional grocery list. To see it working, just issue npm run dev in the root folder of our project and, when the build process finishes, head to http://localhost:3000/ and play with our app.

Aside: softening the authentication burden with Auth0
Creating the authentication mechanism was not the hardest task but, for every single application that we build, we would have to recreate it or reuse existing components: one for the front-end application, to show the sign in and sign up forms, and one to handle identity persistence and retrieval. Furthermore, if we want to support identity providers like Google, Facebook, GitHub, etc., our task starts to become harder. But fear not: Auth0 is here to make our lives easier and more secure.

Configuring your Auth0 client

The first thing we'll need to do is sign up for a free Auth0 account and configure a new client. When we first reach Auth0's dashboard, we are asked what identity providers we want to use. Since our application is intended for end users, we can choose only Google, which covers many of the users around; those who are not covered can still input an e-mail address and a password to sign up. After that, we must go to Clients and create a new one, choosing "Single Page Web Application" as the client type. Name it something memorable, to help us remember what it is about.

Now that we have our client created, we need to take note of three properties: Domain, Client ID, and Client Secret. The first two properties will be used to configure Auth0's front-end component, and the third one will be used to validate the JWT token sent by Auth0. All of them can be found on the Settings tab of the client that we've just created. The last configuration that we need to do, before updating our code, is to add http://localhost:3000 as an allowed callback URL on our Auth0 client.

Updating the backend's source code

Since we won't handle the sign in and sign up features by ourselves anymore, the first file that we will update is src/server/authentication.ts. In it we will make three changes: remove the sign_up constant, remove the sign_in constant, and replace the SUPER_SECRET constant with the client secret that we've copied from Auth0. After making these changes, our file will keep only the secured_routes middleware, now using the secret copied from Auth0.
Another file that needs to be updated is src/server/user/user.ts. Before using Auth0, our users were registered in our application with the sign_up middleware that we have just removed; now we will have to register users in our database the first time they use our application. Besides that, whenever a user signed in, our backend used to send the list of items within the sign-in response, so we will also need a new way to send the grocery list. Both situations will be handled by a new middleware that we will create. The relevant part of the final version of the file will look like this:

```typescript
export const update_list = {
  path: '/api/update-list',
  // ...same middleware as before...
};

export const get_list = {
  path: '/api/get-list', // illustrative name; the exact path is elided in the source
  middleware: function *() {
    // new users must be persisted before being able to fill data
    let user = userDao.findByEmail(this.state.user.email);
    if (!user) {
      user = { email: this.state.user.email, items: [] };
      userDao.insertUser(user);
    }
    this.body = user;
  }
};
```

We had to change the file to stop exporting a single literal object and start exporting two constants: update_list, which already existed but didn't have a name, and get_list, which is responsible for persisting new users and for sending back their grocery lists.

TypeScript will now complain that we are trying to insert a user without a password, a name, and a token. We don't need to handle this information anymore: Auth0 handles it for us, and we can trust whatever is sent by Auth0, through JWT verification. Therefore we have to update src/common/user.ts so that the User class keeps only the properties we still persist, such as the e-mail address and the list of items.

Now that we have added the new route that allows users to retrieve their grocery lists, we need to update the main file that handles routes. Let's open src/server/app.ts and change it to become the following:

```typescript
import * as Router from 'koa-router';
import * as fs from 'fs';
// updating import from user routes
import { update_list, get_list } from './user/user';
// removing deprecated routes
import { secured_routes } from './authentication';

const router = new Router();

const load_html = function () {
  return new Promise((resolve, reject) => {
    fs.readFile('./dev/client/index.html', { encoding: 'utf8' }, (err, data) => {
      if (err) return reject(err);
      resolve(data);
    });
  });
};

router.get('*', function *(next) {
  if (this.request.url.startsWith('/api')) {
    yield next;
  } else {
    this.body = yield load_html();
  }
});

// securing any path that is reached from now on
router.use(secured_routes.path, secured_routes.middleware);
router.post(update_list.path, update_list.middleware);
// adding the new endpoint
router.get(get_list.path, get_list.middleware);

export default router;
```

There were just a few modifications that we had to make to this file.
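The "register on first request" behaviour described above boils down to a find-or-create step. The standalone sketch below illustrates it with an in-memory map; the names and the store are ours, not the project's LokiJS code:

```typescript
// Sketch of get_list's behaviour: look the user up by e-mail and,
// if absent, persist a fresh record with an empty grocery list.
// (Hypothetical in-memory store standing in for the real DAO.)

interface User {
  email: string;
  items: string[];
}

const store = new Map<string, User>();

function findOrCreate(email: string): User {
  let user = store.get(email);
  if (!user) {
    // first visit: register the user with an empty list
    user = { email, items: [] };
    store.set(email, user);
  }
  return user;
}
```

Because the e-mail comes from a verified token, this upsert can safely run on every request: known users pass straight through, unknown ones are registered exactly once.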
The following is a summary of what we have changed in this file:

- We changed how we import from user, since we now have two constants being exported and not a single default one.
- We removed the sign_in and sign_up routes that we were importing from authentication.
- We removed the two lines that were registering the sign_in and sign_up middlewares on the router.
- We secured the middlewares registered on the router.
- We registered the new middleware that enables users to retrieve their lists.

With that, we end the modifications needed on our backend. Let's focus now on our front-end.

Updating the front-end's source code

Now that our backend is ready to handle JWT tokens sent by Auth0, and the parts that are no longer needed (the sign in and sign up features) have been removed, we can configure our Angular 2 application to use Auth0's Lock component. Let's start by removing the front-end components that we don't need anymore. To do that, we remove the two folders src/client/app/sign-in/ and src/client/app/sign-up/, and then we remove their declarations from src/client/app/app.module.ts. We start by removing their imports from this file, and then we remove them from the declarations property of the @NgModule configuration object, as illustrated below:

```typescript
// remove the SignInComponent and SignUpComponent imports

@NgModule({
  // previous config...
  // leave only AppComponent and GroceryListComponent
  declarations: [ AppComponent, GroceryListComponent ],
  // remaining config...
})
export class AppModule { }
```

After that, let's install the auth0-lock dependency in our application by issuing npm install --save auth0-lock. This is the component that we will use to enable users to register and sign in to our application. To configure it, open the src/client/app/authentication.service.ts file and update it as follows:

```typescript
const Auth0Lock = require('auth0-lock').default;

const AUTH0_CLIENT_ID = 'some-client-id-provided-by-auth0';
const AUTH0_DOMAIN = 'brunokrebs.auth0.com';

// inside AuthenticationService:
private lock = new Auth0Lock(AUTH0_CLIENT_ID, AUTH0_DOMAIN, {
  auth: {
    params: {
      scope: 'openid email'
    }
  }
});

constructor(private router: Router) {
  // we'll listen for an authentication event to be raised and,
  // if successful, will log the user in
  this.lock.on('authenticated', (authResult) => this.onAuthenticated(authResult));
}

private onAuthenticated(authResult): void {
  localStorage.setItem('id_token', authResult.idToken);
  this.lock.getProfile(authResult.idToken, (error, profile) => {
    if (error) {
      return console.log(error);
    }
    this._user = profile;
    this.router.navigateByUrl('/grocery-list');
    this.lock.hide();
  });
}

showSignInScreen(): void {
  this.lock.show();
}
```

A few changes were made to this file. First we imported Auth0Lock; then we added a property called lock and pointed it to a new instance of Auth0Lock. This new instance was configured with the Client ID and the Domain that we copied from Auth0. After creating the Auth0Lock instance, we registered an authenticated event listener to handle the response sent by Auth0. This listener is responsible for saving the JWT token in localStorage, retrieving the user's profile, and sending the user to their grocery list. We have also added a new showSignInScreen method that is responsible for opening Lock's sign in/up screen; this method will be used later.

We must now update the GroceryListComponent to use the new route that we created on our backend. We will make three changes to this component: we will make it implement the OnInit lifecycle hook, which will trigger the AJAX request to the newly created route that responds with the user's grocery list; we will add a new private property called getList that holds the path to this new route; and we will implement a private method, loadList, that issues the HTTP request to this route and returns an observable that is consumed by the ngOnInit method. These three changes will result in the following code:

```typescript
import { Component, OnInit } from '@angular/core';
// other imports...
import { Response } from '@angular/http';

// component declaration...
export class GroceryListComponent implements OnInit {
  private updateList = '/api/update-list';
  private getList = '/api/get-list'; // illustrative path; elided in the source
  items: string[] = [];

  constructor(private authHttp: AuthHttp) { }

  ngOnInit() {
    this.loadList().subscribe(items => this.items = items);
  }

  private loadList(): Observable<string[]> {
    return this.authHttp.get(this.getList)
      .map((res: Response) => res.json().items)
      .catch(() => Observable.throw('Server error'));
  }

  // other methods: getItems, addItem, removeItem, and getUser...
}
```

Having updated the GroceryListComponent, we are almost done. The last thing that we have to do is make the AppComponent use the new showSignInScreen method that we created on the AuthenticationService.
Let's open the AppComponent definition, under src/client/app/, and update it as follows:

```typescript
// imports, including ViewEncapsulation, from '@angular/core'...

// component definition...
export class AppComponent {
  title = '…'; // value elided in the source

  constructor(private authenticationService: AuthenticationService) { }

  showSignInScreen(): void {
    this.authenticationService.showSignInScreen();
  }
}
```

Now we just need to update the src/client/app/app.html file by removing the sign up link, which leads to a component that we already removed, and updating the sign in link to call the showSignInScreen method that we have created on the AppComponent. This will make our file end up like this:

```html
<nav class="navbar navbar-default">
  <div class="container">
    <div class="navbar-header">
      <a class="navbar-brand" [routerLink]="['']"></a>
    </div>
    <ul class="nav navbar-nav pull-right">
      <li><a (click)="showSignInScreen()">Sign in</a></li>
    </ul>
  </div>
</nav>
<section>
  <router-outlet></router-outlet>
</section>
```

We are now ready to run our grocery list application with Auth0 identity management. By issuing the npm run dev command, we can use it, accessing it at http://localhost:3000/ and signing in with Google or any other e-mail address, as before. Now, if you want to add another identity provider, like Twitter, you just have to go to Auth0's dashboard and configure it; no changes to the source code are needed. Sweet, right?

Conclusion

As we could see, writing Koa web servers is very easy, and we achieve very clean code through the use of generators; it is almost as if we were reading code that runs entirely synchronously. By using TypeScript on both the backend and the front-end, our code becomes more readable and reliable, due to the type-safe approach of this programming language. Allied to that, we can see that although these technologies are relatively new, we already have support for a lot of things, like securing communication with JWT tokens.", "image" : "https://cdn.auth0.com/blog/koa-angular2/logo.png", "date" : "January 19, 2017" } , { "title" : "How to create an application in Kotlin and secure it using JSON Web Tokens (JWTs)", "description" : "Learn how to create a simple application using Kotlin, a statically typed programming language that targets the Java Virtual Machine (JVM)", "author_name" : "Sathyaish Chakravarthy", "author_avatar" : "https://cdn.auth0.com/blog/create-kotlin-app/avatar.png", "author_url" : "http://twitter.com/Sathyaish", "tags" :
"kotlin", "url" : "/how-to-create-a-kotlin-app-and-secure-it-using-jwt/", "keyword" : "tldr in this articlewell learn how to create a simple application using kotlina statically typed programming language that targets the java virtual machinejvmll secure all communication with our application using json web tokensjwtsin this articlejwtsdont worry if you dont know what a json web tokenisill cover that in a bitand you dont need to know any kotlineitherif youve got some decent programming experience with any programming languageyoull be able to follow through without any difficultyre a java programmerthoughll feel right at home because kotlin uses the java api to do everythingits got a very sparse syntax with a lightweight standard libraryin factlet us cover all of the features of kotlin used in the code that goes with this articlea crash course in kotlinto declare a variable that can be read from and written touse the var keywordvar namestring = “joe bloggs”var age = 20// type inferred by the compilerage = 21// valid statement since the variable is writable alsosemi-colons as statement terminators are optionalbut its a good practice to have themanywayall throughout our codell use semi-colons to terminate statementsto declare a read-only variable that can only be initialized onceuse the val keywordval nameval age = 20age = 21// illegal statementcompiler errorthe variable is read-onlyto create a classclass student{ // this is a class that has one default parameterless constructor // the parenthesis after the class name is actually the constructor declaration for this class}class student { // if the class has just one default parameterless constructor// the parenthesis area optional}class student// if the class is emptythe curlies are optionalto create an object of the student class// kotlin does not have the new keyword// this creates a read-only / assign-once // variable of type studentval studentstudent = studentval student = student// type inferred// read-write 
variablevar studentnullable typeskotlin distinguishes between nullable and non-nullable typeseach typewhether a primitive or user definedhas both a nullable version and a non-nullable versionyou create a nullable version by appending asymbol after the type namevar ageint= 2// nullable integerage = null// validval joestudent= null// validvar lisastudent = null// illegalthe variable is not nullableto declare a class student with a read-onlynon-nullable property called name and a read-writenullable property called agestringin the above codename and age are propertieskotlin creates a getter for the name property and a getter and setter pair for the age propertyalsothe student class in the listing above gets a parameterized constructorto create an object of the student class and use it// creates a read-onlynullable variable of type studentval student= student“lisa”nullto create a class with optional parameters in its constructorvar genderstring = “male”// optional argument omittednullable studentval joe“joe bloggs”20// provided an explicit value for all // arguments including the optional argument// non-nullable studentval lisa“lisa hendricks”18“female”functions in kotlin can exist independently outside of any classto create a function that returns voidunit is a type that means voidfun printnameunit { if{ println}}the unit keyword is optionalthe same function could be re-written as the following{ if}}to create a class with instance methods{ fun displayname{ printlnthis}}to use itjoedisplaynamekotlin does not have static classesbecause functions can exist independent of classesyou just write your functions that you would have wanted to write in a static class in a separate file// filejustmyfunctionsktfun play{}fun stop{}fun singsong{}fun fastforwardframesboolean {}sometimesyou want to create a class with no methods just to hold data so you can serialize/deserialize it or just hold some data in it so as to pass that data across the boundaries of your applicationsuch 
Such classes are referred to by various names, such as data transfer objects, data objects, beans, plain old Java objects (POJOs), or plain old CLR objects (POCOs) in the case of C# and .NET. To create a class of that kind in Kotlin, you'll simply add the keyword data before the class keyword, like so:

```kotlin
data class Student(val name: String) {
  // a class with one read-only, non-nullable property
  // that has only a getter for its name property
}
```

When you create a data class, in addition to the getters and setters for properties (which Kotlin creates even for classes that are not marked as data classes), Kotlin generates the following methods for the data class behind the scenes: hashCode(), toString(), equals(), and copy().

Annotations

You can annotate a method, constructor, or class like in any other language:

```kotlin
@AnnotationForClass
class Student @AnnotationForConstructor constructor(@PropertyAnnotation val name: String) {
  @MethodAnnotation
  fun display() {
    // ...
  }
}
```

All classes and methods in Kotlin are non-inheritable and non-overridable, respectively, unless otherwise explicitly stated by declaring them with the open keyword:

```kotlin
class ThisClassCannotBeInherited;

open class ThisClassCan;

open class ThisClassCanAlsoBeInherited {
  fun thisMethodCannotBeOverriden() { }
  open fun thisMethodCan() { }
}

class ChildClass : ThisClassCanAlsoBeInherited() {
  override fun thisMethodCan() { }
}
```

Interfaces and class inheritance:

```kotlin
interface IPerson { }
open class Person : IPerson { }

interface ILoggable { }
open class Student : Person(), IPerson, ILoggable { }
```

In Kotlin, packages and visibility modifiers work exactly like they do in Java.

To create a singleton object:

```kotlin
object IAmASingletonObject {
  var firstProperty: String = "";
}
```

The above construct is called an object declaration. An object declaration is an object instance that does not belong to a class, and since that's the only instance you can have of it, it is effectively a singleton object. Therefore, you use an object declaration when you need to create a singleton instance. We use the object declaration like so:

```kotlin
IAmASingletonObject.firstProperty = "Hello, World!";
```

You can put an object declaration inside a class as well; in this case, the object will be able to access the internals of its containing class.
internals of its containing classif you mark an object declaration with the companion keywordthe members of the companion object can be referenced directly as members of the containing class like soclass userval usernameval password{ companion object validator { public fun isvalidboolean { // access containing objects members ifusername“joe”{ // } } }}// usageval useruser = getuserifuserisvalid{}thats pretty much all you need to know to get started and be productive with kotlinwhat were going to developwell create a client/server application that gets from a web api aof book recommendations for a logged-in user based on the users interests or likesll store the users likes in a databasell write both the client and the web api in kotlinthe client will be a desktop application written using the swing/awt librariesthe server is an http servlet that returns data objects declared in a library named contracts as json stringsll call our application—in factll call this whole system—by the name bookyardheres what the high-level component architecture for bookyard would look likeworkflowassuming the servlet application is runningwhen the user launches the client applicationa login dialog will appeara successful login will dismiss the login dialog and display a window listing the recommended books for the logged-in userplease ignore the aesthetical anomalies of the graphical user interfaceoauth 20 and token /claims based authorizationin order to understand how well ensure secured communication between the client and the server of bookyardd like to provide a to-a-four-year-old explanation of some of the highfalutin terms popularly used in elite architect circlesconsider a traditional web application that resides on a single serverthats how it used to be done in the old days when the web was a new thing—you had all the source code on a single serveryou had two partiesa web server that had some server-side code that ran on the remote server and also some client-side code that ran 
Since both the client code and the server code were part of a single application, usually written by a single developer or company, the server-side code and the client-side code could be considered a single entity, or a single application. The user used the application in a web browser. In those cases, a simple username and password based authentication was sufficient to validate the identity of the user. When the user logged in, the server would issue a session ID and an authentication cookie to the user's browser, and the browser would carry these two with every subsequent request to the server. This all worked fine until the number of users outgrew the server's capacity to handle requests.

Scenario 1: a clustered environment. When you had two servers running the same application code, you had a problem. If the login request came to server A, which issued a session cookie and an authentication cookie to the user, server B didn't know anything about those cookies. Any subsequent requests coming in to server B, even after the user had innocently validated his identity earlier with server A, would fail with server B.

One obvious solution to this problem is to make server A and server B share their session IDs. This could be done by having an external state server that held the session state for the entire web application in an external data source, such as a database or an in-memory state server. A similar but simpler and more secure solution, however, is to have a separate authentication server. Each request that comes to either of the servers, A or B, is validated for the presence of a special value in the request header, a value that could only have been obtained from the authentication server. If the value is present, server A or B services the request. If the special value is missing, the client gets redirected to the authentication server, which, after logging the user in, issues this special value that represents a successful login and an active session. Let's call this value returned by the authentication server an authentication token.

Below is a diagrammatic representation of this simple sequence of three interactions. Under this regime, when the user sends a request to either of the servers, A or B, each of them checks whether the user has an authentication token. If he doesn't, they redirect his request to the authorization server, whose duty is to ask the user for his username and password, authenticate his identity, and issue him an authentication token upon successful login. The user's request is then redirected automatically back to the original URL he intended to get the data from, i.e., one of server A or server B. This time, his request carries the token, so either of the servers fulfills his request. This scheme of authentication and authorization is known as token-based authentication or token-based authorization. A series of steps performed in a sequence, as indicated above, may also be called a workflow; let us name this particular workflow the simple authentication server workflow.

I'd like to confess that the names authentication token and simple authentication server workflow are names I have made up; you will not find them in security literature. But in deliberately flying by the seat of my pants on good accord, I am trying to avoid trespassing on names that already occur in security literature with specific connotations. If I named this token an access token, for example, or if I named the series of steps described above an authorization workflow, I'd be trespassing on a commonly accepted nomenclature that we'll make a nodding acquaintance with later in this article. The above series of steps, though potent as a basic building block for more specialized variants, is rather simplistic in that it does not describe the contents of the token and ways of securing it against theft. In practice, how we name such a token is predicated on such puritanical considerations.

Token uses and composition: in our simple example, the client application is a web application that serves a list of book recommendations for a user based on the user's likes.
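Before moving on, the simple authentication server workflow described above can be condensed into a small sketch. All the names here (`Request`, `Serve`, `issuedTokens`, and so on) are hypothetical, invented for illustration; a real deployment would validate tokens cryptographically or against shared state rather than an in-memory set:

```kotlin
data class Request(val url: String, val token: String?)

sealed class Outcome
data class Serve(val url: String) : Outcome()
data class RedirectToAuthServer(val originalUrl: String) : Outcome()

// tokens the authentication server has issued so far
val issuedTokens = mutableSetOf<String>()

// what servers A and B do with each incoming request
fun handle(request: Request): Outcome =
    if (request.token != null && request.token in issuedTokens)
        Serve(request.url)                  // token present: fulfill the request
    else
        RedirectToAuthServer(request.url)   // no token: go log in first

// what the authentication server does upon a successful login
fun login(userName: String, password: String): String {
    val token = "token-for-$userName"       // stand-in for a real opaque value
    issuedTokens.add(token)
    return token
}
```

A request without a token is redirected to the authentication server; after `login` returns a token, the same request, retried with that token, is served by either server.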
The authentication server is a separate endpoint that could be a part of the same application or a different one. The simplicity is born of an assumption that both the authentication server and the resource servers are developed by the same vendor. Because both the authentication server and the resource servers are assumed to either be a part of the same web application or, at worst, be URLs of two web applications developed by the same vendor, the use of such a token was both to authenticate a user and, consequently, to authorize him for access to the data held in the resource servers. The evolution of the web in recent times has opened up a slew of interesting possibilities that call for variations on the workflow described above.

Authentication: big players such as Google, Yahoo, and Facebook, to name a few, command large user bases of the total internet population. This has encouraged users and web application developers to trust these big players to authenticate users for their identity, consequently freeing up web application developers to concentrate on developing just business logic, delegating the authentication of their users to these giants. Imagine building a job search portal: you need to validate that the user is above 18 years of age and has a valid Social Security number, and you don't really care about any other information about the user. You could use the US government website to validate the user against these two parameters and receive a token containing identification information about the user. This specific need for authenticating a user's identity dictates what the contents of the token will be.

Authorization: another use that has come to light is the sharing of data from one web application to another. Imagine yourself developing photo editing software for your users. Instead of having users upload pictures to your web server, you could pull their pictures from their Flickr accounts, edit them in your application, and save them to the users' Dropbox or back to their Flickr accounts. You don't care about the user's identity as much as you care about their permission to use their Flickr photographs and their Dropbox account.

Both the above uses, namely the authentication and authorization of users, dictate the separation of the server granting the token, the role of such a token, and consequently its contents. Though OAuth 2.0 access tokens are opaque strings, the authorization server may, upon request, attach additional information about a user, such as his full name, email address, organization, designation, and what have you, into the token container. Such a workflow is illustrated by a variation named OpenID Connect, which builds on top of the OAuth 2.0 framework; this token would then be called an ID token. This would obviate the necessity for a database look-up: if such information were to be required by either of the servers, they could simply read it from the ID token itself without making a trip to the database server. Each such optional datum attached to an access token is known as a claim, as it establishes a claim upon the identity of the user. For this reason, token-based authentication is also referred to as claims-based authentication.

The client or server may communicate using tokens even when their dialog does not pertain to authentication or authorization. With each request, the client may package the information it needs to send to the server in the form of a token, although it wouldn't be called an access token in that case. We'll observe later that the login dialog of the Bookyard client sends the user's username and password in such a token when making a login request to the Bookyard server; that is an example of a token of this type, one not behaving as an access token.

Scenario 2: the distributed web and OAuth 2.0. This mechanism of claims-based authorization described in the above paragraphs has opened up the web to new possibilities. Consider a scenario where you needed to import your Gmail contacts into LinkedIn so you could invite them all to join your LinkedIn network. Until 2007, you couldn't have done that without having your arm twisted. The only way to do that would have been for LinkedIn to present you with a screen wherein you typed your Gmail username and password into a LinkedIn user interface, effectively giving LinkedIn your Gmail username and password. What a shoddy life our younger selves lived!

Thankfully, a bunch of guys at Twitter got together and said, "That must change." They started by identifying that in a transaction of the kind described above, there are three parties involved: a resource server, a server where the user's data is kept (that would be Gmail, because your contacts would be kept there); a user, who owned the resources at Gmail; and a client, a third-party application that needed access to your data from the resource server (in other words, LinkedIn, the third party that needed your Gmail, i.e., resource server, contacts). They wrote out a bunch of rules that both the resource server (Gmail in this example) and the third-party application (LinkedIn in this example) would have to incorporate into their code in order to perform claims-based authorization, so that you wouldn't have to give your Gmail username and password to LinkedIn. They called this grand scheme of interaction OAuth, and it has since spread like wildfire.

OAuth has, since its advent, been revised twice; the versions are v1, v1a, and v2.0. Version 2.0 is the most recent and popular one, and the versions are not backward compatible; any reference to OAuth in this article without an explicit version suffix must be understood to mean OAuth 2. Today, virtually every website, from GitHub to Gmail, Picasa to Flickr, and perhaps even your own company's, has a resource server that exposes data in an OAuth way. The OAuth 2.0 specification also calls resource servers by the name OAuth servers, and the third-party clients by the name OAuth clients. Virtually every user, knowingly or not, uses OAuth; wherever on the web you see buttons of the kind below, that is OAuth 2.0 in action. The evolution of the web has enabled a scenario where the traditional web application could be written by an OAuth provider, the client application (as was the case with LinkedIn in our example above) could be written by someone else, and the user could be someone else still.
OAuth 2.0 access tokens are opaque and can be any string, even the string "hello", but such a value offers no security. A real access token is a bit more useful than "hello", as it carries an expiry timestamp and may even be encrypted using symmetric or asymmetric encryption.

What is a JSON Web Token? The access token is essentially a string sent in the header of the HTTP response by the authorization server to the client. With every subsequent request, the client sends this string back to the server in one of three ways: (1) as a part of the URL in a GET request; (2) as a part of the body in a POST request; or (3), the preferred way, as a part of the Authorization HTTP header in the following form:

```
Authorization: Bearer <access_token>
```

The OAuth 2.0 specification pussyfoots its way out of mandating a method, deferring the choice to the authorization server. Whether a JWT is used for an access token, and which of the above three methods a client must adopt, is dictated by the authorization server's documentation; OAuth 2.0 extension specifications relate to the choices of the access token structure, and clients are not free to choose any of the three at their disposition.

Note the word Bearer, and also the moniker bearer token used to represent an access token. The moniker bearer token is rightly applied, as the access token is a bearer instrument, just like a tender bill in your pocket or a movie ticket you buy: the access token doesn't have a way to attach the user to it. Once you lose it, anyone who has it may misuse it to represent themselves as you, thereby stealing your identity. The best practice is to obscure the access token, and for additional security, you may encrypt it.

If you were to write a client that had to first decide how to compose an access token string, then program that same logic in the OAuth server (which you didn't write, by the way), and then encrypt the access token... oh, but wait, you've got to decide the encryption algorithm and then a secret key with which to encrypt it. And it doesn't stop here: you've got to tell all of this to the server so it can write the back-end logic for all of this decryption using the same technique, and then it has to again create a new access token to send you after a successful login. Oh my, that would all add up to a gigantic nuisance.

Another bunch of people, who were interested in and following the development of OAuth, saw far into the future and were able to anticipate this pain. They defined a bunch of formats that all OAuth servers and clients could be free to choose from to create access tokens. One such format is named JSON Web Token (JWT). The format lets you compose the access token as a JSON string. It has three parts:

1. a header that lets you specify that the string is a JWT, and the signing algorithm chosen to sign the token, if any;
2. a body containing the user claims; this is also referred to as the payload;
3. a signature.

The signature is derived by first converting the header into base64url, then converting the payload into base64url, concatenating the two base64-encoded values with a period as a separator between the encoded values, and using a secret key to sign the resultant string. The snippet below illustrates the composition of a JSON web token:

```
header:    { "alg": "HS256", "typ": "JWT" }

payload:   { "iss": "Issuer/OAuth server name",
             "sub": "This is the subject of communication",
             "name": "Joe Bloggs" }

signature: HMACSHA256(
               base64UrlEncode(header) + "." + base64UrlEncode(payload),
               secret)
```

When sending the JWT, you send the header and payload parts encoded as base64url, then you add another period at the end of these two parts and append the signature derived from the algorithm above to the end of this string. An example JWT might look like this (newlines added for readability):

```
eyjhbgcioijiuzi1niisinr5cci6ikpxvcj9.
eyjpc3mioijpqxv0acbtzxj2zxigtmftzsisinn1yii6ilroaxmgaxmgdghlihn1ymply3qgb2ygy29tbxvuawnhdglvbiisim5hbwuioijkb2huiervzsj9.
odvw2luxnbannnwpstpqsnyxngooun1h0penprvz2fi
```
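The derivation above can be reproduced in a few lines with the JDK's built-in HMAC support. This `signHs256` function is an illustrative, hypothetical helper only; Bookyard itself delegates token creation to the jjwt library, as the later listings show:

```kotlin
import java.util.Base64
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Compose an HS256-signed JWT by hand:
// base64url(header) + "." + base64url(payload) + "." + signature
fun signHs256(headerJson: String, payloadJson: String, secret: String): String {
    val enc = Base64.getUrlEncoder().withoutPadding()
    val header = enc.encodeToString(headerJson.toByteArray(Charsets.UTF_8))
    val payload = enc.encodeToString(payloadJson.toByteArray(Charsets.UTF_8))
    val signingInput = "$header.$payload"

    // sign the concatenated, period-separated parts with the secret key
    val mac = Mac.getInstance("HmacSHA256")
    mac.init(SecretKeySpec(secret.toByteArray(Charsets.UTF_8), "HmacSHA256"))
    val signature = enc.encodeToString(mac.doFinal(signingInput.toByteArray(Charsets.UTF_8)))

    return "$signingInput.$signature"
}
```

Calling `signHs256("""{"alg":"HS256","typ":"JWT"}""", """{"name":"Joe Bloggs"}""", "secret")` yields exactly the three-part dotted string described above.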
The benefits of using a JWT with claims-based authentication, as is obvious from the commentary above, are the following:

- It works in a clustered environment as well as in a single-server deployment.
- It works when the client and the authorization server are independent parties that are not necessarily provided by the same vendor.
- It can be used to centralize, and jettison out of a large system, its authentication and authorization.
- It can be used even when each of the OAuth servers (resource servers or authorization servers) is written using a different technology: one of your resource servers could be written using ASP, one could be written using PHP, and the authorization server could be written using Python.
- There is no affinity between the client and the server; any server will fulfill a request as long as the request has the access token.
- Unless your session data is large, there is no need to maintain each session separately: the expiry on the access token represents the session. The request doesn't need to have come to the same server before in order to preserve the session information, and no session history need be created with each individual resource server.
- Since the access token can be encrypted or signed, it can be protected from man-in-the-middle attacks. It is mandated that we perform token-based authorization on a secure channel, such as SSL/TLS/HTTPS.

Securing Bookyard with JSON web tokens: when the user clicks the Login button on the login dialog, the client application composes a JSON web token containing the following claims:

- `iss`: the issuer of the JSON web token. Since the client is sending this new JWT, it writes its own application ID as the value of this claim. Though we're using a JWT to send this information, we could have sent it as the body of a normal POST request; sending this information encrypted within a JWT makes it more secure. This is a use of a JWT that is not an access token (an access token is granted by an authorization server to the client); rather, it is an example of using a JWT as a means to communicate generic information securely between two parties.
- `sub`: the subject of the claim. This can be any mutually agreed-upon value between the client and the OAuth server; in our example, the server expects the value "loginRequest" for a login request coming from a client.
- `userName`: the username of the user attempting to log in. There's presently no way to create a new user, and there exists just one user in the application at present; the username of that user is sathyaish.
- `password`: the password of the user attempting to log in. The password of the only user of this application is **foobar**.

The code to send this information is in a class named ApiAuthenticationManager, which resides in the client project in the package bookyard.client, as shown by the code listing below:

```kotlin
package bookyard.client

import java.util.HashMap
import com.fasterxml.jackson.core.type.TypeReference
import com.fasterxml.jackson.databind.ObjectMapper
import io.jsonwebtoken.Jwts
import io.jsonwebtoken.SignatureAlgorithm
import bookyard.contracts.Constants
import bookyard.contracts.IAuthenticationManager
import bookyard.contracts.OperationResult
import bookyard.core.*

public class ApiAuthenticationManager : IAuthenticationManager<String> {

    override public fun authenticateUser(userName: String,
                                         password: String,
                                         appId: String,
                                         appSecret: String): OperationResult<String> {
        try {
            val claims: HashMap<String, Any> = HashMap()
            claims.put("iss", appId)
            claims.put("sub", "loginRequest")
            claims.put("userName", userName)
            claims.put("password", password)

            // make a JWT out of the claims using the jjwt library
            val jwtString = Jwts.builder()
                    .setClaims(claims)
                    .signWith(SignatureAlgorithm.HS256, appSecret)
                    .compact()

            // make a POST request, sending the JWT in the request body
            val loginUrl = Constants().loginUrl
            val body = "appId=" + appId + "&token=" + jwtString
            val responseString = WebRequest().post(loginUrl, body)

            // deserialize the response into an OperationResult<String>
            val mapper = ObjectMapper()
            val result: OperationResult<String> = mapper.readValue(
                    responseString,
                    object : TypeReference<OperationResult<String>>() { })

            // return that to the caller
            return result
        } catch (ex: Exception) {
            return OperationResult(false, ex.message, null)
        }
    }
}
```

The client application uses the open-source jjwt library to make the JSON web token, and the JWT is then signed with the application secret. When an OAuth client registers with an OAuth server, it is granted an application ID and an application secret.
The database that the web API references has these values stored for each client, so that the web API can know which client sent a given request, fetch its application secret from the database, and then use that secret to decrypt the JWT. The client sends, in the body of the POST request, its own application ID in addition to the JWT. The server returns a JSON string that is the serialized form of a class named OperationResult&lt;T&gt;, declared in the contracts module as follows:

```kotlin
package bookyard.contracts

import com.fasterxml.jackson.annotation.JsonCreator
import com.fasterxml.jackson.annotation.JsonProperty

data class OperationResult<T> @JsonCreator constructor(
        @JsonProperty("successful") val successful: Boolean,
        @JsonProperty("errorMessage") val errorMessage: String?,
        @JsonProperty("data") val data: T?) {
}
```

Assuming that the server's root is at https://localhost:8443, the client sends this request to the following URL: HTTP POST https://localhost:8443/login. A servlet named LoginServlet is configured to accept the HTTPS requests coming to this route:

```kotlin
package bookyard.server

import java.io.IOException
import java.util.Date
import javax.servlet.ServletException
import javax.servlet.annotation.WebServlet
import javax.servlet.http.HttpServlet
import javax.servlet.http.HttpServletRequest
import javax.servlet.http.HttpServletResponse
import org.apache.commons.lang3.time.DateUtils
import io.jsonwebtoken.*

open class LoginServlet : HttpServlet() {
    // ...
}
```

The servlet invalidates GET requests on its endpoint by returning a 405 (Method Not Allowed) HTTP status code. This is a security measure to ensure that the JWT and the appId are not sent as a part of the URL. Although there is nothing wrong with sending this information in the URL from a security viewpoint, the specification defining URLs allows a permissible length of 4096 bytes, so it is prudent that the server mandate that this information be sent only as an HTTP POST request:

```kotlin
override fun doGet(request: HttpServletRequest, response: HttpServletResponse) {
    val msg: String = "HTTP GET method not supported."
    try {
        response.sendError(HttpServletResponse.SC_METHOD_NOT_ALLOWED, msg)
    } catch (e: IOException) {
        e.printStackTrace()
    }
}
```

The servlet overrides the doPost method and delegates it to an internal implementation method named doPostInternal. In the event that the parameters received in the request are invalid, an OperationResult&lt;String&gt; denoting a failure, and containing an appropriate error message, is sent to the client:

```kotlin
override fun doPost(request: HttpServletRequest, response: HttpServletResponse) {
    this.doPostInternal(request, response)
}

private fun doPostInternal(request: HttpServletRequest, response: HttpServletResponse) {
    try {
        val appId: String? = request.getParameter("appId")
        if (appId == null || appId.length == 0) {
            val result = OperationResult<String>(false, "Bad request. Missing appId.", null)
            response.getWriter().append(mapper.writeValueAsString(result))
            return
        }

        // get the application secret for this appId from the database
        val appSecret: String? = getApplicationSecret(appId)
        if (appSecret == null) {
            val result = OperationResult<String>(false, "Server error. AppSecret not set.", null)
            response.getWriter().append(mapper.writeValueAsString(result))
            return
        }

        // continued in the next code snippet...
    } catch (ex: Exception) {
        val result = OperationResult<String>(false, ex.message, null)
        response.getWriter().append(mapper.writeValueAsString(result))
    }
}
```

The server then uses the jjwt library to decrypt and parse the JWT received. It validates that the request is, indeed, a login request by checking that the subject claim of the JWT has the value "loginRequest":

```kotlin
// parse the JWT in the request body
val loginRequestJwt: String? = request.getParameter("token")
val jwsClaims: Jws<Claims>? = Jwts.parser()
        .setSigningKey(appSecret)
        .parseClaimsJws(loginRequestJwt)

if (jwsClaims == null) {
    val result = OperationResult<String>(false, "Invalid request. Bad request format.", null)
    response.getWriter().append(mapper.writeValueAsString(result))
} else {
    val body: Claims = jwsClaims.getBody()
    if (!body.getSubject().equals(Constants().JWT_SUBJECT_LOGIN_REQUEST)) {
        val result = OperationResult<String>(false, "Invalid subject.", null)
        response.getWriter().append(mapper.writeValueAsString(result))
    }
}
```

The login servlet then reads the user claims from the JWT and makes a database look-up to authenticate the user, ensuring that the user also belongs to the said application with the specified appId received in the request:

```kotlin
// get the username and password from the JWT payload
val userName: String = body.get("userName", String::class.java)
val password: String = body.get("password", String::class.java)

// authenticate the user against the database: make sure that a user
// with the specified username and password exists, that he is a user
// of an application with the specified appId, and that the appId
// indeed has the specified appSecret
val operationResultOfUser: OperationResult<User> =
        databaseAuthenticationManager.authenticateUser(userName, password, appId, appSecret)

if (operationResultOfUser.successful == false) {
    val result = OperationResult<String>(false, operationResultOfUser.errorMessage, null)
    response.getWriter().append(mapper.writeValueAsString(result))
    return
}

val user: User? = operationResultOfUser.data
if (user == null) {
    val result = OperationResult<String>(false, "Invalid login.", null)
    response.getWriter().append(mapper.writeValueAsString(result))
    return
}
// continued in the next snippet...
```

Finally, if all adds up, the servlet constructs an access token, putting in the user information and an expiry timestamp of one hour from the time the token was generated. Then, it sends the access token back as a serialized OperationResult&lt;String&gt;:

```kotlin
val claims: HashMap<String, Any> = HashMap()
claims.put("iss", "Bookyard Server")
claims.put("sub", "accessToken")
claims.put("userId", user.id)
claims.put("userName", user.userName)
claims.put("fullName", user.fullName)
claims.put("email", user.email)
claims.put("applicationTableId", application.id)
claims.put("generatedTimestamp", Date())

val expiryDate: Date = DateUtils.addHours(Date(), 1)

// make a JWT out of the claims
val accessToken = Jwts.builder()
        .setClaims(claims)
        .setExpiration(expiryDate)
        .signWith(SignatureAlgorithm.HS256, appSecret)
        .compact()

// save the token in the database
val saved: Boolean = saveOrUpdateAccessToken(user, accessToken, expiryDate)
if (!saved) {
    val result = OperationResult<String>(false, "Internal server error.", null)
    response.getWriter().append(mapper.writeValueAsString(result))
    return
}

val result = OperationResult<String>(true, null, accessToken)
response.getWriter().append(mapper.writeValueAsString(result))
```

The ApiAuthenticationManager class at the client deserializes the JSON string response and gives it to its caller within the client. The caller is the login dialog, which checks whether the response received is successful, meaning that the user is a valid user. If so, it unpacks the access token from the data property of the OperationResult&lt;String&gt; object and creates a new window to display the book recommendations. To the book recommendations window's constructor, it passes the access token; the book recommendations screen needs this access token to make subsequent requests to retrieve the list of book recommendations from the server, and it will need to send this access token with every request that it makes.

LoginPane.btnLoginActionListener:

```kotlin
btnLogin.addActionListener(object : ActionListener {
    override fun actionPerformed(e: ActionEvent) {
        // send an authentication request to the server
        val authMgr = ApiAuthenticationManager()

        // get a deserialized OperationResult<String> object
        val result = authMgr.authenticateUser(userName, password, appId, appSecret)

        if (result.successful) {
            // if the user is good, we close the login dialog
            // and load the new form
            containerDialog.setStatusLabel(Color.BLACK, "")
            containerDialog.dispose()

            // get the access token from the data property of the
            // OperationResult<String> object we received from the server
            val accessToken = result.data

            // open the book recommendations window, giving it the access
            // token we received from the server; it will need this access
            // token to make any subsequent requests to the server
            val bookRecommendationsFrame: JFrame = BookRecommendationsFrame(accessToken)
            bookRecommendationsFrame.setSize(500, 500)
            bookRecommendationsFrame.setVisible(true)
        } else {
            // otherwise, we display the error message we
            // received from the API server
            containerDialog.setStatusLabel(Color.RED, result.errorMessage)
        }
    }
})
```

The book recommendations window makes an HTTP POST request, sending the access token in the HTTP Authorization header and the appId in the request body. It sends this new request to the recommendations URL of the web API; the recommendations URL is at https://localhost:8443/recommend and is attended to by a servlet named RecommendServlet, which we will see later in this document.

BookRecommendationsFrame:

```kotlin
package bookyard.client

public class BookRecommendationsFrame(var accessToken: String?) : JFrame() {

    private fun getBookRecommendations(): BookRecommendations? {
        try {
            // get the recommendations URL to hit
            val recommendationsUrl: String = Constants().recommendationsUrl

            // construct the Authorization header with the
            // bearer token / access token
            val authorizationHeaderKey = "Authorization"
            val authorizationHeaderValue = "Bearer ${accessToken}"

            // put the Authorization header in the request headers map
            val headers: MutableMap<String, String> = HashMap()
            headers.put(authorizationHeaderKey, authorizationHeaderValue)

            // put the appId in the body of the request
            val body = "appId=${this.appId}"

            // make a POST request to the server's recommendations URL
            // with the appId in the body and the JWT access token in
            // the Authorization header of the request
            val responseString = WebRequest().post(recommendationsUrl, headers, body)
            System.out.println(responseString)

            // deserialize the response into an
            // OperationResult<BookRecommendations>
            val result: OperationResult<BookRecommendations> = mapper.readValue(
                    responseString,
                    object : TypeReference<OperationResult<BookRecommendations>>() { })

            if (result.successful) {
                return result.data
            } else {
                println(result.errorMessage)
                return null
            }
        } catch (ex: Exception) {
            return null
        }
    }
}
```

From this point onwards, at the server, an authorization filter filters every request before it reaches any servlet or endpoint other than the /login endpoint. The authorization filter checks for the presence of an access token in the Authorization HTTP header, parses it, and validates the token. If the token is valid, the request is passed to the next filter in the chain of filters, and subsequently to its ultimate destination servlet; if not, the filter returns an appropriate error response as an OperationResult.
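At the heart of that check is peeling the Bearer prefix off the Authorization header. A minimal sketch of just that step (`parseBearerToken` is a hypothetical helper written for illustration, not Bookyard's exact filter code):

```kotlin
// Extract the token from an "Authorization: Bearer <token>" header value.
// Returns the token, or null if the header is absent or malformed.
fun parseBearerToken(header: String?): String? {
    if (header == null) return null
    val parts = header.trim().split(" ")
    return if (parts.size == 2 && parts[0] == "Bearer" && parts[1].isNotEmpty())
        parts[1]
    else
        null
}
```

A missing header, a non-Bearer scheme, or a Bearer scheme with no token all yield null, which a filter would translate into an error response before the request ever reaches a servlet.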
AuthorizationFilter:

```kotlin
package bookyard.server

public class AuthorizationFilter : Filter {

    override public fun doFilter(request: ServletRequest,
                                 response: ServletResponse,
                                 chain: FilterChain) {

        val req: HttpServletRequest = request as HttpServletRequest
        val resp: HttpServletResponse = response as HttpServletResponse

        val path: String = req.getServletPath()
        if (path == "/login") {
            chain.doFilter(request, response)
        } else {
            // just check for the presence of the access token
            val bearerComponent: String? = req.getHeader("Authorization")
            val bearerArray = bearerComponent!!.split(" ")
            val accessToken: String = bearerArray[1]

            // the access token can be decrypted using the appId's appSecret
            val appId: String? = req.getParameter("appId")
            if (appId == null) {
                resp.sendError(HttpServletResponse.SC_BAD_REQUEST)
                return
            }

            val appSecret: String? = getApplicationSecret(appId)
            if (appSecret == null) {
                resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "Invalid appId.")
                return
            }

            val user: User? = this.getUserFromAccessToken(accessToken, appSecret)
            if (user == null) {
                resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "Invalid access token.")
                return
            }

            // check that a row exists against the userId obtained from
            // the access token in the AccessToken table, and that the
            // token hasn't expired
            val valid: Boolean = validateAccessToken(user, accessToken)
            if (!valid) {
                resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "Expired access token.")
                return
            }

            req.setAttribute("user", user)
            chain.doFilter(request, response)
        }
    }
}
```

The recommendations servlet, embodied in the class RecommendServlet, does not need to validate the request for the presence of an access token. It simply does what it is meant to do: return the list of recommendations based on a user, which it does by looking up the database.

RecommendServlet:

```kotlin
package bookyard.server

@WebServlet("/recommend")
public class RecommendServlet : HttpServlet() {

    override protected fun doPost(request: HttpServletRequest,
                                  response: HttpServletResponse) {

        val user: User? = request.getAttribute("user") as User?
        if (user == null) {
            // return an error
            return
        }

        // get recommendations from the database based on the user's
        // likes, which are also in the database
        val recommendations: BookRecommendations? = getBookRecommendations(user)
        if (recommendations == null) {
            println("Failed to retrieve the user's book recommendations from the database.")
            val result = OperationResult<BookRecommendations>(false, "Internal server error.", null)
            response.getWriter().append(mapper.writeValueAsString(result))
            return
        }

        val result = OperationResult<BookRecommendations>(true, null, recommendations)
        response.getWriter().append(mapper.writeValueAsString(result))
    }
}
```

Database schema: it would make sense to look at the database schema now. Most of the column names are descriptive, so you'll get what they mean; I'll provide an explanation only where it is necessary.

Table User:
- Id: primary key
- UserName
- PasswordHash: a hash of the user's password
- FullName
- Email

Table Application:
- Id: primary key
- Name: a user-friendly name for the OAuth client application
- ApplicationId: a string representing the application ID that is displayed to the client application administrator; this string is used as the appId during all communication between any OAuth clients and this server
- ApplicationSecret: JWTs are signed with this symmetric key

Table Membership: a relationship table that stores which user belongs to which application/third party/OAuth client.
- Id: primary key
- UserId: foreign key for [User].[Id]
- UserName
- ApplicationTableId: foreign key for [Application].[Id]
- ApplicationId

Table AccessToken: when a login request succeeds, the server generates a new access token for that request and creates a new entry in this table if one doesn't already exist for the application and user making the request. If an entry already exists, the server updates the entry in this table to reflect the new access token and the new expiry time. The update is necessary; otherwise, we would have stale/expired access tokens in this table, and requests made from valid OAuth clients after the expiry would fail.
- UserId: foreign key for [User].[Id]
- ApplicationTableId: foreign key for [Application].[Id]
- ApplicationId
- AccessToken: the JWT string
- ExpiryDate: datetime2, stored as absolute time but sent to the client in Unix time, i.e., the number of milliseconds since 1st January 1970

Table Likeable: represents things that can be liked, e.g., "programming", "java", "kotlin".
- Id: primary key
- Name

Table UserLike: each entry represents a relationship between a user and the thing he likes.
- UserId: foreign key for [User].[Id]
- UserName
- LikeableId: foreign key for [Likeable].[Id]

Table Book:
- Id: primary key
- Name: title of the book
- Author: name of the author
- Description
- AmazonUrl

Source code: you can download the whole source code for this application from this GitHub repository. As a C# developer aiming to localize complexity, after learning some basic Kotlin syntax and practicing it, I wrote the application first in Java and then translated each line to Kotlin. You'll find both the Java and the C# versions in the Bookyard repository.
To know more about the toolset, the modules in the project, known issues, and how to launch the application, read the README.md file in the Bookyard repository.

Further reading:
- Kotlin documentation
- Bookyard source code
- Bookyard README file
- What is OAuth?
- OAuth is about authorization, not about authentication
- OAuth is delegated authorization
- OAuth 2.0 authorization code flow (demo)
- OAuth 2.0 authorization code flow
- OAuth 2.0 YouTube playlist

Summary: in this article, we learned how to use the Kotlin programming language, a statically typed programming language that targets the Java Virtual Machine. We described the function of the Bookyard application, considered ways we could use token-based authentication and authorization to secure an application, and learned what OAuth 2.0 is, what JSON web tokens are, and how we used JWTs to secure the Bookyard application.", "image" : "https://cdn.auth0.com/blog/create-kotlin-app/logo.png", "date" : "January 18, 2017" } , { "title" : "The Auth0 Marketing Website Has Been Localized for the Japanese Market", "description" : "Auth0 makes it simple and easy to add authentication and authorization to any app. 
By localizing our marketing website, we hope to make it easier for developers and companies to implement and see the benefits of modern identity management.", "author_name" : "Martin Gontovnikas", "author_avatar" : "https://www.gravatar.com/avatar/df6c864847fba9687d962cb80b482764??s=60", "author_url" : "http://twitter.com/mgonto", "tags" : "auth0", "url" : "/auth0-japanese-localization/", "keyword" : "bellevuewa - konnichiwawe are pleased to announce that the auth0 marketing website has been localized for the japanese marketauth0 makes it simple and easy to add authentication and authorization to any app with its powerful api and sdks for most languages and frameworksauth0 has always championed and embraced open-source software and we hope that by localizing our vast knowledge of security best practices we can make it easier for developers and companiesregardless of industrylocationor languageto make the web a safer and more secure place for everyoneif you are visiting the auth0 website from a japanese localeyou will automatically see the website presented in japaneseif you wish to view the japanese version of the website but are outside of the japanese localeyou can view it by visiting https//auth0com/jpabout auth0auth0 provides frictionless authentication and authorization for developersthe company makes it easy for developers to implement even the most complex identity solutions for their webmobileand internal applicationsultimatelyauth0 allows developers to control how a persons identity is used with the goal of making the internet saferas of august2016auth0 has raised over $24m from trinity venturesbessemer venture partnersk9 venturessilicon valley bankfounders co-opportland seed fund and nxtp labsand the company is further financially backed with a credit line from silicon valley bankfor more information visit httpscom or follow @auth0 on twitter", "image" : "https://cdn.auth0.com/blog/auth0-japanese-localization/hero.png", "date" : "January 17, 2017" } , { 
"title" : "A Brief History of JavaScript", "description" : "We take a look at the evolution of JavaScript, arguably one of the most important languages of today, and tomorrow", "author_name" : "Sebastián Peyrott", "author_avatar" : "https://en.gravatar.com/userimage/92476393/001c9ddc5ceb9829b6aaf24f5d28502a.png?size=200", "author_url" : "https://twitter.com/speyrott?lang=en", "tags" : "javascript", "url" : "/a-brief-history-of-javascript/", "keyword" : "javascript is arguably one of the most important languages todaythe rise of the web has taken javascript places it was never conceived to bewe take a look at how javascript has evolved in its short historyand where it is headedread ontweet this it all began in the 90sit all happened in six months from may to december 1995netscape communications corporation had a strong presence in the young webits browsernetscape communicatorwas gaining traction as a competitor to ncsa mosaicthe first popular web browsernetscape was founded by the very same people that took part in the development of mosaic during the early 90sand nowwith money and independencethey had the necessary freedom to seek further ways to expand the weband that is precisely what gave birth to javascriptmarc andreessenfounder of netscape communications and part of the ex-mosaic teamhad the vision that the web needed a way to become more dynamicanimationsinteraction and other forms of small automation should be part of the web of the futureso the web needed a small scripting language that could interact with the domwhich was not set in stone as it is right nowbutand this was an important strategic call at the timethis scripting language should not be oriented to big-shot developers and people with experience in the software engineering side of thingsjava was on the rise as welland java applets were to be a reality soonso the scripting language for the web would need to cater to a different type of audiencedesignersindeedthe web was statichtml was still young 
and simple enough for non-developers to pick upso whatever was to be part of the browser to make the web more dynamic should be accessible to non-programmersand so the idea of mocha was bornmocha was to become a scripting language for the websimpledynamicand accessible to non-developersthis is when brendan eichfather of javascriptcame into the pictureeich was contracted by netscape communications to develop ascheme for the browserscheme is a lisp dialect andas suchcomes with very little syntactic weightit is dynamicpowerfuland functional in naturethe web needed something of the sorteasy to grasp syntacticallyto reduce verbosity and speed up developmentand powerfuleich saw a chance to work on something he liked and joined forcesat the moment there was a lot of pressure to come up with a working prototype as soon as possiblethe java languagenée oak at the timewas starting to get tractionsun microsystems was making a big push for it and netscape communications was about to close a deal with them to make java available in the browserso why mochathis was the early name for javascriptwhy create a whole new language when there was an alternativethe idea at the time was that java was not suited for the type of audience that would consume mochascriptersamateursjava was just too bigtoo enterprisy for the roleso the idea was to make java available for bigprofessionalcomponent writerswhile mocha would be used for small scripting tasksin other wordsmocha was meant to be the scripting companion for javain a way analogous to the relationship between c/c++ and visual basic on the windows platformat the moment this was all going onengineers at netscape started studying java in detailthey went so far as starting to develop their own java virtual machinethis vmhoweverwas quickly shot down on the grounds that it would never achieve perfect bug-for-bug compatibility with sunsa sound engineering call at the timethere was a lot of internal pressure to pick one language as soon as 
possiblepythontclscheme itself were all possible candidatesso eich had to work fasthe had two advantages over the alternativesfreedom to pick the right set of featuresand a direct line to those who made the callsunfortunatelyhe also had a big disadvantageno timelots of important decisions had to be made and very little time was available to make themjavascriptakmochawas born in this contextin a matter of weeks a working prototype was functionaland so it was integrated into netscape communicatorwhat was meant to be a scheme for the browser turned into something very differentthe pressure to close the deal with sun and make mocha a scripting companion to java forced eichs handa java-like syntax was requiredand familiar semantics for many common idioms was also adoptedso mocha was not like scheme at allit looked like a dynamic javabut underneath it was a very different beasta premature lovechild of scheme and selfwith java looksthe prototype of mocha was integrated into netscape communicator in may 1995in short timeit was renamed to livescriptat the momentthe wordlivewas convenient from a marketing point of viewin december 1995netscape communications and sun closed the dealmocha/livescript would be renamed javascriptand it would be presented as a scripting language for small client-side tasks in the browserwhile java would be promoted as a biggerprofessional tool to develop rich web componentsthis first version of javascript set in stone many of the traits the language is known for todayin particularits object-modeland its functional features were already present in this first versionit is hard to say what would have happened had eich failed to succeed in coming up with a working prototype in timeworking alternatives were not java-like at allschemewere very differentit would have been difficult for sun to accept a companion language to java that was so differentor that predated java itself in history and developmenton the other handjava was for a long time an 
important part of the webhad sun never been part of the equationnetscape could have exercised more freedom at picking a languagethis is truebut would netscape have opted to adopt an external solution when an internally controlled and developed was possiblewe will never knowdifferent implementationswhen sun and netscape closed the deal to change the name of mocha/livescript to javascript a big question was raisedwhat would happen to alternative implementationsalthough netscape was quickly becoming the preferred browser at the timeinternet explorer was also being developed by microsoftfrom the very first daysjavascript made such a considerable difference in user experience that competing browsers had no choice but to come up with a working solutiona working implementation of javascriptand for a very long timeweb standards were not strongso microsoft implemented their own version of javascriptcalled jscriptkeepingjavaoff the name avoided possible trademark issuesjscript was different in more than just nameslight differences in implementationin particular with regards to certain dom functionscaused ripples that would still be felt many years into the futurejavascript wars were fought in more fronts than just names and timelines and many of its quirks are just the wounds of these warsthe first version of jscript was included with internet explorer 30released in august 1996netscapes implementation of javascript also received an internal namethe version released with netscape navigator 20 was known as mochain the fall of 1996eich rewrote most of mocha into a cleaner implementation to pay off for the technical debt caused by rushing it out of the doorthis new version of netscapes javascript engine was called spidermonkeyspidermonkey is still the name of the javascript engine found in firefoxnetscape navigators grandsonfor several yearsjscript and spidermonkey were the premier javascript enginesthe features implemented by bothnot always compatiblewould define what would 
become of the web in the following years. major design features: although javascript was born in a hurry, several powerful features were part of it from the beginning. these features would define javascript as a language, and would allow it to outgrow its walled garden in spite of its quirks. “whether any existing language could be used, instead of inventing a new one, was also not something i decided. the diktat from upper engineering management was that the language must “look like java”. that ruled out perl and tcl, along with scheme. later, in 1996, john ousterhout came by to pitch tk and lament the missed opportunity for tcl. i'm not proud, but i'm happy that i chose scheme-ish first-class functions and self-ish (albeit singular) prototypes as the main ingredients. the java influences, especially y2k date bugs but also the primitive vs. object distinction (e.g., string vs. String), were unfortunate.” - brendan eich's blog: popularity. java-like syntax: although keeping the syntax close to java was not the original idea behind javascript, marketing forces changed that. in retrospect, although a different syntax might have been more convenient for certain features, it is undeniable that a familiar syntax has helped javascript gain ground easily. compare this java example:
public class Sample {
    public static void main(String[] args) {
        System.out.println(\"Hello world\");
        try {
            final MissileSilo silo = new MissileSilo(\"silo.weapons.mil\");
            silo.launchMissile(args[0]);
        } catch(Exception e) {
            System.out.println(\"Unexpected exception\" + e);
        }
    }
}
to this (modern) javascript example:
console.log('Hello world');
try {
    const silo = new MissileSilo('silo.weapons.mil');
    silo.launchMissile(process.argv[0]);
} catch(e) {
    console.log('Unexpected exception' + e);
}
functions as first-class objects: in javascript, functions are simply one more object type. they can be passed around just like any other element: they can be bound to variables and, in later versions of javascript, they can even be thrown as exceptions. this feature is a probable result of the strong influence scheme had in javascript's development.
var myFunction = function() {
    hello();
}
otherFunction(myFunction);
myFunction.property = 1;
by making functions first-class objects, certain functional programming patterns are possible. for instance, later versions of javascript make use of certain functional patterns:
var a = [1, 2, 3];
a.forEach(function(e) {
    // ...
});
these patterns have been exploited to great success by many libraries, such as underscore and immutable.js. prototype-based object model: although the prototype-based object model was popularized by javascript, it was first introduced in the self language. eich had a strong preference for this model, and it is powerful enough to model the more traditional approach of simula-based languages such as java or c++. in fact, classes, as implemented in later versions of javascript, are nothing more than syntactic sugar on top of the prototype system. one of the design objectives of self, the language that inspired javascript's prototypes, was to avoid the problems of simula-style objects. the dichotomy between classes and instances was seen as the cause of many of the inherent problems in simula's approach. it was argued that, as classes provided a certain archetype for object instances, it was harder and harder to adapt those base classes to unexpected new requirements as the code evolved and grew bigger. by making instances the archetypes from which new objects could be constructed, this limitation was to be removed. thus the concept of prototypes: an instance that fills in the gaps of a new instance by providing its own behavior. if a prototype is deemed inappropriate for a new object, it can simply be cloned and modified without affecting all other child instances. this is arguably harder to do in a class-based approach (i.e. by modifying base classes):
function Vehicle(maxSpeed) {
    this.maxSpeed = maxSpeed;
}
Vehicle.prototype.maxSpeed = function() {
    return this.maxSpeed;
}
function Car() {
    Vehicle.call(this);
}
Car.prototype = new Vehicle();
the power of prototypes made javascript extremely flexible, sparking the development of many libraries with their own object models. a popular library called stampit makes heavy use of the prototype system to extend and manipulate objects in ways that are not possible using a traditional class-based approach. prototypes have made javascript appear deceptively simple, empowering library authors. a big quirk: primitives vs. objects. perhaps one of the biggest mistakes in the hurried development of javascript was making certain objects that behave similarly have different types: the type of a string literal is not identical to the type of the string object (new String(...)). this sometimes enforces unnecessary and confusing typechecks:
> typeof 'hello world'
'string'
> typeof new String('hello world')
'object'
but this was just the start. in javascript's history, its hurried development made certain design mistakes a possibility much too real. the advantages of having a language for the dynamic web could not be postponed, however, and history took over. “the rest is perverse, merciless history. js beat java on the client, rivaled only by flash, which supports an offspring of js, actionscript.” - brendan eich's blog: popularity. a trip down memory lane: a look at netscape navigator 2.0 and 3.0. the first public release of javascript was integrated in netscape navigator 2, released in 1995. thanks to the wonders of virtualization and abandonware websites, we can revive those moments today. many basic features of javascript were not working at the time: anonymous functions and prototype chains, the two most powerful features, were not working as they do today. still, these features were already part of the design of the language and would be implemented correctly in the following years. it should be noted that the javascript interpreter in this release was considered in alpha state. fortunately, a year later, netscape navigator 3, released in 1996, was already making a big difference. note how the error gives us more information about what is going on: this lets us speculate the interpreter is treating the prototype property in a special way. so we attempt to replace the object with a basic object instance, which we then modify, et voilà, it works. somewhat, at least: the assignment inside the test function appears to do nothing. clearly, there was a lot of work that needed to be done. nonetheless, javascript in its state was usable for many
tasks and its popularity continued growingfeatures such as regular expressionsjson and exceptions were still not availablejavascript would evolve tremendously the following yearsecmascriptjavascript as a standardthe first big change for javascript after its public release came in the form of ecma standardizationecma is an industry association formed in 1961 concerned solely with standardization of information and communications systemswork on the standard for javascript was started in november 1996the identification for the standard was ecma-262 and the committee in charge was tc-39by the timejavascript was already a popular element in many pagesthis press release from 1996 puts the number of javascript pages at 300000javascript and java are cornerstone technologies of the netscape one platform for developing internet and intranet applicationsin the short time since their introduction last yearthe new languages have seen rapid developer acceptance with more than 175000 java applets and more than 300000 javascript-enabled pages on the internet today according to wwwhotbotcom- netscape press releasestandardization was an important step for such a young languagebut a great call nonethelessit opened up javascript to a wider audienceand gave other potential implementors voice in the evolution of the languageit also served the purpose of keeping other implementors in checkback thenit was feared microsoft or others would stray too far from the default implementation and cause fragmentationfor trademark reasonsthe ecma committee was not able to use javascript as the namethe alternatives were not liked by many eitherso after some discussion it was decided that the language described by the standard would be called ecmascripttodayjavascript is just the commercial name for ecmascriptecmascript 1 &on the road to standardizationthe first ecmascript standard was based on the version of javascript released with netscape navigator 4 and still missed important features such as 
regular expressionsjsonexceptionsand important methods for builtin objectsit was working much better in the browserjavascript was becoming better and betterversion 1 was released in june 1997notice how our simple test of prototypes and functions now works correctlya lot of work had gone under the hood in netscape 4and javascript benefited tremendously from itour example now essentially runs identically to any current browserthis is a great state to be for its first release as a standardthe second version of the standardecmascript 2was released to fix inconsistencies between ecma and the iso standard for javascriptiso/iec 16262so no changes to the language were part of itit was released in june 1998an interesting quirk of this version of javascript is that errors that are not caught at compile timewhich are in general left as unspecifiedleave to the whim of the interpreter what to do about themthis is because exceptions were not part of the language yetecmascript 3the first big changeswork continued past ecmascript 2 and the first big changes to the language saw the lightthis version brought inregular expressionsthe do-while blockexceptions and the try/catch blocksmore built-in functions for strings and arraysformatting for numeric outputthe in and instanceof operatorsmuch better error handlingecmascript 3 was released in december 1999this version of ecmascript spread far and wideit was supported by all major browsers at the timeand continued to be supported many years latereven todaysome transpilers can target this version of ecmascript when producing outputthis made ecmascript 3 the baseline target for many librarieseven when later versions of the standard where releasedalthough javascript was more in use than everit was still primarily a client-side languagemany of its new features brought it closer to breaking out of that cagenetscape navigator 6released in november 2000 and a major change from past versionssupported ecmascript 3almost a year and a half 
laterfirefoxa lean browser based on the codebase for netscape navigatorwas released supporting ecmascript 3 as wellthese browsersalongside internet explorer continued pushing javascript growththe birth of ajaxajaxasynchronous javascript and xmlwas a technique that was born in the years of ecmascript 3although it was not part of the standardmicrosoft implemented certain extensions to javascript for its internet explorer 5 browserone of them was the xmlhttprequest functionin the form of the xmlhttp activex controlthis function allowed a browser to perform an asynchronous http request against a serverthus allowing pages to be updated on-the-flyalthough the term ajax was not coined until years laterthis technique was pretty much in placethe term ajax was coined by jesse james garrettco-founder of adaptive pathin this iconic blog postxmlhttprequest proved to be a success and years later was integrated into its separate standardas part of the whatwg and the w3c groupsthis evolution of featuresan implementor bringing something interesting to the language and implementing it in its browseris still the way javascript and associated web standards such as html and css continue to evolveat the timethere was much less communication between partieswhich resulted in delays and fragmentationto be fairjavascript development today is much more organizedwith procedures for presenting proposals by any interested partiesplaying with netscape navigator 6 this release supports exceptionsthe main showstopper previous versions suffered when trying to access googleincrediblytrying to access google in this version results in a viewableworking pagefor contrast we attempted to access google using netscape navigator 4and we got hit by the lack of exceptionsincomplete renderingand bad layoutthings were moving fast for the webeven back thenplaying with internet explorer 5 internet explorer 5 was capable of rendering the current version of google as wellit is well knownthere were many differences 
in the implementation of certain features between internet explorer and other browsersthese differences plagued the web for many yearsand were the source of frustration for web developers for a long timewho usually had to implement special cases for internet explorer usersto access the xmlhttprequest object in internet explorer 5 and 6it was necessary to resort to activexother browsers implemented it as a native objectvar xhr = new activexobjectmicrosoftxmlhttparguablyit was internet explorer 5 who brought the idea to the table firstit was not until version 7 that microsoft started to follow standards and consensus more closelysome outdated corporate sites still require old versions of internet explorer to run correctly1 and 4the years of struggleunfortunatelythe following years were not good for javascript developmentas soon as work on ecmascript 4 startedstrong differences in the committee started to appearthere was a group of people that thought javascript needed features to become a stronger language for large-scale application developmentthis group proposed many features that were big in scope and in changesothers thought this was not the appropriate course for javascriptthe lack of consensusand the complexity of some of the proposed featurespushed the release of ecmascript 4 further and further awaywork on ecmascript 4 had begun as soon as version 3 came out the door in 1999many interesting features were discussed internally at netscapeinterest in implementing them had dwindled and work on a new version of ecmascript stopped after a while in the year 2003an interim report was released and some implementorssuch as adobeand microsoftjscriptnetused it as basis for their enginesin 2005the impact of ajax and xmlhttprequest sparked again the interest in a new version of javascript and tc-39 resumed workyears passed and the set of features grew bigger and biggerat the peak of developmentecmascript 4 had features such asclassesinterfacesnamespacespackagesoptional 
type annotationsoptional static type checkingstructural typestype definitionsmultimethodsparameterized typesproper tail callsiteratorsgeneratorsinstrospectiontype discriminating exception handlersconstant bindingsproper block scopingdestructuringsuccint function expressionsarray comprehensionsthe ecmascript 4 draft describes this new version as intended for programming in the largeif you are already familiar with ecmascript 6/2015 you will notice that many features from ecmascript 4 were reintroduced in itthough flexible and formally powerfulthe abstraction facilities of es3 are often inadequate in practice for the development of large software systemsecmascript programs are becoming larger and more complex with the adoption of ajax programming on the web and the extensive use of ecmascript as an extension and scripting language in applicationsthe development of large programs can benefit substantially from facilities like static type checkingname hidingearly binding and other optimization hooksand direct support for object-oriented programmingall of which are absent from es3- ecmascript 4 draftan interesting piece of history is the following google docs spreadsheetwhich displays the state of implementation of several javascript engines and the discussion of the parties involved in thatthe committee that was developing ecmascript 4 was formed by adobemozillaoperain unofficial capacityyahoo entered the game as most of the standard and features were already decideddoug crockfordan influential javascript developerwas the person sent by yahoo for thishe voiced his concerns in strong opposition to many of the changes proposed for ecmascript 4he got strong support from the microsoft representativein the words of crockford himselfbut it turned out that the microsoft member had similar concerns — he also thought the language was getting too big and was out of controlhe had not said anything prior to my joining the group because he was concerned thatif microsoft tried to 
get in the way of this thingit would be accused of anti-competitive behaviorbased on microsofts past performancethere were maybe some good reasons for them to be concerned about that — and it turned outthose concerns were well-foundedbecause that happenedbut i convinced him that microsoft should do the right thingand to his credithe decided that he shouldand was able to convince microsoft that it shouldso microsoft changed their position on es4- douglas crockford — the state and future of javascriptwhat started as doubtssoon became a strong stance against javascriptmicrosoft refused to accept any part of ecmascript 4 and was ready to take every necessary action to stop the standard from getting approvedeven legal actionspeople in the committee managed to prevent a legal strugglethe lack of concensus effectively prevented ecmascript 4 from advancingsome of the people at microsoft wanted to play hardball on this thingthey wanted to start setting up paper trailsbeginning grievance procedureswanting to do these extra legal thingsi didnt want any part of thatmy disagreement with es4 was strictly technical and i wanted to keep it strictly technicalt want to make it nastier than it had to bei just wanted to try to figure out what the right thing to do wasso i managed to moderate it a little bitbut microsoft still took an extreme positionsaying that they refused to accept any part of es4so the thing got polarizedbut i think it was polarized as a consequence of the es4 team refusing to consider any other opinionsat that moment the committee was not in consensuswhich was a bad thing because a standards group needs to be in consensusa standard should not be controversial- douglas crockford — the state and future of javascriptcrockford pushed forward the idea of coming up with a simplerreduced set of features for the new standardsomething all could agree onno new syntaxonly practical improvements born out of the experience of using the languagethis proposal came to be known as 
ecmascript 3for a timeboth standards coexistedand two informal committees were set in placeecmascript 4was too complex to be finished in the face of discordance1 was much simplerin spite of the struggle at ecmawas completedthe end for ecmascript 4 came in the year 2008when eich sent an email with the executive summary of a meeting in oslo which detailed the way forward for ecmascript and the future of versions 3the conclusions from that meeting were tofocus work on es31 with full collaboration of all partiesand target two interoperable implementations by early next yearcollaborate on the next step beyond es3which will include syntactic extensions but which will be more modest than es4 in both semantic and syntactic innovationsome es4 proposals have been deemed unsound for the weband are off the table for goodpackagesnamespaces and early bindingthis conclusion is key to harmonyother goals and ideas from es4 are being rephrased to keep consensus in the committeethese include a notion of classes based on existing es3 concepts combined with proposed es31 extensionsall in allecmascript 4 took almost 8 years of development and was finally scrappeda hard lesson for all who were involvedharmonyappears in the conclusions abovethis was the name the project for future extensions for javascript receivedharmony would be the alternative that everyone could agree onafter the release of ecmascript 3in the form of version 5as well see belowecmascript harmony became the place were all new ideas for javascript would be discussedactionscriptactionscript was a programming language based on an early draft for ecmascript 4adobe implemented it as part of its flash suite of applications and was the sole scripting language supported by itthis made adobe take a strong stance in favor of ecmascript 4even going as far as releasing their engine as open-sourcetamarinin hopes of speeding ecmascript 4 adoptionan interesting take on the matter was exposed by mike chambersan adobe 
employee: “actionscript 3 is not going away, and we are not removing anything from it based on the recent decisions. we will continue to track the ecmascript specifications, but as we always have, we will innovate and push the web forward when possible, just as we have done in the past.” - mike chambers' blog. it was the hope of actionscript developers that innovation in actionscript would drive features in ecmascript. unfortunately, this was never the case, and what later came to ecmascript 2015 was in many ways incompatible with actionscript. some saw this move as an attempt of microsoft to remain in control of the language and the implementation. the only viable engine for ecmascript 4 at the moment was tamarin, so microsoft, who had 80% browser market share at the moment, could continue using its own engine (and extensions) without paying the cost of switching to a competitor's alternative or taking time to implement everything in-house. others simply say microsoft's objections were merely technical, like those from yahoo's. its engine, at this point, had many differences with other implementations; some have seen this as a way to remain covertly in control of the language. actionscript remains today the language for flash, which, with the advent of html5, has slowly faded in popularity. actionscript remains the closest look at what ecmascript 4 could have been, had it been implemented by popular javascript engines:
package {
    import flash.display.Sprite;
    public class MyRectangle_v3 extends Sprite {
        private var _outlineWeight:Number;
        private var _color:uint;
        private var _xLocation:int;
        private var _yLocation:int;
        private var _rectangleWidth:int;
        private var _rectangleHeight:int;
        public function MyRectangle_v3(outlineWeight:Number, color:uint, xLocation:int, yLocation:int, rectangleWidth:int, rectangleHeight:int) {
            _outlineWeight = outlineWeight;
            _color = color;
            _xLocation = xLocation;
            _yLocation = yLocation;
            _rectangleWidth = rectangleWidth;
            _rectangleHeight = rectangleHeight;
        }
        public function draw():void {
            graphics.lineStyle(_outlineWeight);
            graphics.beginFill(_color);
            graphics.drawRect(_xLocation, _yLocation, _rectangleWidth, _rectangleHeight);
            graphics.endFill();
        }
    }
}
e4x: what is e4x? e4x was the name an extension for ecmascript received. it was released during the years of ecmascript 4 development (2004), so the moniker e4x was adopted. its actual name is ecmascript for xml, and it was standardized as ecma-357. e4x extends ecmascript to support native processing and parsing of xml content: xml is treated as a native data type in e4x. it saw initial adoption by major javascript engines, such as spidermonkey, but it was later dropped due to lack of use. it was removed from firefox in version 21. other than the number 4 in its name, e4x has little to do with ecmascript 4. a sample of what e4x used to bring to the table:
var sales = <sales vendor=\"john\">
    <item type=\"peas\" price=\"4\" quantity=\"6\"/>
    <item type=\"carrot\" price=\"3\" quantity=\"10\"/>
    <item type=\"chips\" price=\"5\" quantity=\"3\"/>
</sales>;
alert(sales.item.(@type == \"carrot\").@quantity);
alert(sales.@vendor);
for each (var price in sales..@price) {
    alert(price);
}
delete sales.item[0];
sales.item += <item type=\"oranges\" price=\"4\"/>;
sales.item.(@type == \"oranges\").@quantity = 4;
other data formats, such as json, have gained wider acceptance in the javascript community, so e4x came and went without much ado. ecmascript 5: the rebirth of javascript. after the long struggle of ecmascript 4, from 2008 onwards the community focused on ecmascript 3.1, and ecmascript 4 was scrapped. in the year 2009 ecmascript 3.1 was completed and signed off by all involved parties. ecmascript 4 was already recognized as a specific variant of ecmascript even without any proper release, so the committee decided to rename ecmascript 3.1 to ecmascript 5 to avoid confusion. ecmascript 5 became one of the most supported versions of javascript, and also became the compilation target of many transpilers. ecmascript 5 was wholly supported by firefox 4 (2011), chrome 19 (2012), safari 6, opera 12 and internet explorer 10. ecmascript 5 was a rather modest update to ecmascript 3. it included: getters/setters; trailing commas in array and object literals; reserved words as property names; new object methods (create, defineProperty, keys, seal, freeze, getOwnPropertyNames, etc.); new array
methodsisarrayindexofeverysomemapfilterreducetrim and property accessnew date methodstoisostringnowtojsonfunction bindjsonimmutable global objectsundefinednaninfinitystrict modeother minor changesparseint ignores leading zeroesthown functions have proper this valuesnone of the changes required syntactic changesgetters and setters were already unofficially supported by various browsers at the timethe new object methods improveprogramming in the largeby giving programmers more tools to ensure certain invariants are enforcedcreatepropertystrict mode also became a strong tool in this area by preventing many common sources for errorsthe additional array methods improve certain functional patternsthe other big change is jsona javascript-inspired data format that is now natively supported through jsonstringify and jsonparseother changes make small improvements in several areas based on practical experienceall-in-allecmascript 5 was a modest improvement that helped javascript become a more usable languagefor both small scriptsand bigger projectsthere were many good ideas from ecmascript 4 that got scrapped and would see a return through the ecmascript harmony proposalecmascript 5 saw another iteration in the year 2011 in the form of ecmascript 5this release clarified some ambiguous points in the standard but didnt provide any new featuresall new features were slated for the next big release of ecmascriptecmascript 62015&72016a general purpose languagethe ecmascript harmony proposal became a hub for future improvements to javascriptmany ideas from ecmascript 4 were cancelled for goodbut others were rehashed with a new mindsetlater renamed to ecmascript 2015was slated to bring big changesalmost every change that required syntactic changes was pushed back to this versionthis timethe committee achieved unity and ecmascript 6 was finally released in the year 2015many browser venders were already working on implementing its featuresbut with a big changelog things took some 
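To make the ECMAScript 5 additions described earlier concrete, here is a small sketch (our own illustrative example, not from the original article) combining strict mode, Object.create, the new Array methods, and native JSON:

```javascript
'use strict'; // ES5 strict mode: many silent mistakes become thrown errors

// Object.create builds an object with a given prototype (new in ES5).
var base = { greet: function () { return 'hello, ' + this.name; } };
var user = Object.create(base);
user.name = 'Ada'; // 'Ada' is just a sample value

// The new Array methods enable functional patterns without manual loops.
var squares = [1, 2, 3, 4]
    .filter(function (n) { return n % 2 === 0; }) // keep even numbers
    .map(function (n) { return n * n; });         // square them

// Native JSON support: stringify and parse round-trip plain data.
var copy = JSON.parse(JSON.stringify({ squares: squares }));

console.log(user.greet());
console.log(copy.squares);
```

Every call here (filter, map, JSON.parse, Object.create) is part of the ES5 standard library summarized above.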
time. Not all browsers have complete coverage of ECMAScript 2015, although they are very close. The release of ECMAScript 2015 caused a big jump in the use of transpilers such as Babel or Traceur. Even before the release, as these transpilers tracked the progress of the technical committee, people were already experiencing many of the benefits of ECMAScript 2015.

Some of the big features of ECMAScript 4 were implemented in this version of ECMAScript, but they were implemented with a different mindset: classes in ECMAScript 2015 are little more than syntactic sugar on top of prototypes. This mindset eases the transition and the development of new features.

We did an extensive overview of the new features of ECMAScript 2015 in our 'A Rundown of JavaScript 2015 Features' article. You can also take a look at the ECMAScript compatibility table to get a sense of where we stand right now in terms of implementation. A short summary of the new features follows:

- let (lexical) and const (unrebindable) bindings
- Arrow functions (shorter anonymous functions) and lexical this (enclosing scope this)
- Classes (syntactic sugar on top of prototypes)
- Object literal improvements (computed keys, shorter method definitions)
- Template strings
- Promises
- Generators, iterables, iterators, and for...of
- Default arguments for functions and the rest operator
- Spread syntax
- Destructuring
- Module syntax
- New collections (Set, WeakSet, WeakMap)
- Proxies and reflection
- Symbols
- Typed arrays
- Support for subclassing built-ins
- Guaranteed tail-call optimization
- Simpler Unicode support
- Binary and octal literals

Classes, const, promises, generators, iterators, modules: these are all features meant to take JavaScript to a bigger audience, and to aid in programming in the large. It may come as a surprise that so many features could get past the standardization process when ECMAScript 4 failed. In this sense, it is important to remark that many of the most invasive features of ECMAScript 4 were not reconsidered (namespaces, optional typing), while others were rethought in a way they could get past previous objections: making classes syntactic sugar on top of prototypes.
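A few of the ES2015 features just listed can be seen together in one small sketch (our own illustrative example; the Rectangle class and its values are made up):

```javascript
// Classes: syntactic sugar on top of prototypes.
class Rectangle {
  constructor(width, height) {
    this.width = width;
    this.height = height;
  }
  area() {
    return this.width * this.height;
  }
}

// const bindings, arrow functions, and template strings.
const rect = new Rectangle(3, 4);
const describe = r => `area: ${r.area()}`;

// Destructuring pulls fields out of an object by name.
const { width, height } = rect;

// The rest operator collects arguments into a real array.
function sum(...numbers) {
  return numbers.reduce((acc, n) => acc + n, 0);
}

console.log(describe(rect));
console.log(sum(width, height));
```

Each line maps directly onto an entry in the feature list above, which is what made ES2015 feel like a different language while still running on the same prototype machinery underneath.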
ECMAScript 2015 was hard work and took almost six years to complete (and more to fully implement). The fact that such an arduous task could be completed by the ECMAScript technical committee was seen as a good sign of things to come.

A small revision to ECMAScript was released in the year 2016. This small revision was the consequence of a new release process implemented by TC39. All new proposals must go through a four-stage process. Every proposal that reaches stage 4 has a strong chance of getting included in the next version of ECMAScript (though the committee may still opt to push back its inclusion). This way, proposals are developed almost on their own (though interaction with other proposals must be taken into account). Proposals do not stop the development of ECMAScript: if a proposal is ready for inclusion, and enough proposals have reached stage 4, a new ECMAScript version can be released.

The version released in the year 2016 was a rather small one. It included:

- The exponentiation operator (**)
- Array.includes
- A few minor corrections (generators can't be used with new, etc.)

Certain interesting proposals have already reached stage 4 in 2016, so what lies ahead for ECMAScript?

The future and beyond: ECMAScript 2017 and later. Perhaps the most important stage 4 proposal currently in the works is async/await. Async/await is a syntactic extension to JavaScript that makes working with promises much more palatable. Take the following ECMAScript 2015 code:

```javascript
function apiDoSomethingMoreComplex(withThis) {
    const urlA = /* ... */;
    const urlB = /* ... */;

    httpLib.request(urlA).then(result => {
        const parsed = parseResult(result);
        return new Promise((resolve, reject) => {
            database.update(updateStatement, parsed).then(() => {
                resolve();
            }, error => {
                reject(error);
            });
        });
    }).then(() => {
        return httpLib.request(urlB);
    }).then(result => {
        return worker.processData(result);
    }).then(result => {
        logger.info(`apiDoSomethingMoreComplex success (${result})`);
    });
}
```

and compare it to the following async/await enabled code:

```javascript
async function apiDoSomethingMoreComplex(withThis) {
    try {
        let result = await httpLib.request(urlA);
        const parsed = parseResult(result);
        await database.update(updateStatement, parsed);
        result = await httpLib.request(urlB);
        result = await worker.processData(result);
        logger.info(`apiDoSomethingMoreComplex success (${result})`);
    } catch (error) {
        logger.error(error);
    }
}
```
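The two ECMAScript 2016 additions mentioned above are small enough to show in a quick snippet (our own illustrative example):

```javascript
// The exponentiation operator replaces Math.pow for common cases.
const kilobyte = 2 ** 10;

// Array.prototype.includes answers membership directly, replacing the
// indexOf(...) !== -1 idiom. Unlike indexOf, it also finds NaN.
const letters = ['a', 'b', 'c'];
const hasB = letters.includes('b');
const findsNaN = [NaN].includes(NaN);
const indexOfNaN = [NaN].indexOf(NaN); // still -1
```

The NaN behavior is the one observable difference between the two membership idioms, and it is part of why `includes` was worth standardizing at all.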
Other stage 4 proposals are minor in scope: Object.values and Object.entries, string padding, Object.getOwnPropertyDescriptors, trailing commas in function parameters. These proposals are all slated for release in the year 2017; however, the committee may choose to push them back at their discretion. Just having async/await would be an exciting change, but the future does not end there. We can take a look at some of the other proposals to get a sense of what lies further ahead. Some interesting ones are: SIMD APIs, asynchronous iteration (async/await + iteration), generator arrow functions, 64-bit integer operations, realms (state separation/isolation), and shared memory and atomics. JavaScript is looking more and more like a general purpose language. But there is one more big thing in JavaScript's future that will make a big difference: WebAssembly.

If you have not heard about WebAssembly, you should read about it. The explosion of libraries, frameworks, and general development that was sparked since ECMAScript 5 was released has made JavaScript an interesting target for other languages. For big codebases, interoperability is key. Take games, for instance: the lingua franca for game development is still C++, and it is portable to many architectures. Porting a Windows or console game to the browser was seen as an insurmountable task. The incredible performance of current JIT JavaScript virtual machines made this possible. Thus things like Emscripten, an LLVM-to-JavaScript compiler, were born. Mozilla saw this and started working on making JavaScript a suitable target for compilers: asm.js was born. asm.js is a strict subset of JavaScript that is ideal as a target for compilers. JavaScript virtual machines can be optimized to recognize this subset and produce even better code than is currently possible with normal JavaScript code. The browser is slowly becoming a whole new target for compiling apps, and JavaScript is at the center of it.

There are certain limitations that not even asm.js can resolve: it would be necessary to make changes to JavaScript that have nothing to do with its purpose. To make the web a proper target for other languages, something different is needed, and that is exactly what WebAssembly is. WebAssembly is a bytecode for the web. Any program with a suitable compiler can be compiled to WebAssembly and run on a suitable virtual machine; JavaScript virtual machines can provide the necessary semantics. The first versions of WebAssembly aim at 1-on-1 compatibility with the asm.js specification. WebAssembly not only brings the promise of faster load times (bytecode can be parsed faster than text), but also possible optimizations not available at the moment in asm.js. Imagine a web of perfect interoperability between JavaScript and your existing code. At first sight, this might appear to compromise the growth of JavaScript, but in fact it is quite the contrary: by making it easier for other languages and frameworks to be interoperable with JavaScript, JavaScript can continue its growth as a general purpose language, and WebAssembly is the necessary tool for that. Development versions of Chrome, Firefox, and Microsoft Edge support a draft of the WebAssembly specification and are capable of running demo apps.

Aside: JavaScript use at Auth0. At Auth0 we are heavy users of JavaScript. From our Lock library to our backend, JavaScript powers the core of our operations. We find its asynchronous nature and the low entry barrier for new developers essential to our success. We are eager to see where the language is headed and the impact it will have on its ecosystem. Sign up for a free Auth0 account and take a first-hand look at a production-ready ecosystem written in JavaScript. And don't worry, we have client libraries for all popular frameworks and platforms.

Conclusion: the history of JavaScript has been long and full of bumps. It was proposed as a 'Scheme for the web'. Early on, it got Java-like syntax strapped on. Its first prototype was developed in a matter of weeks. It suffered the perils of marketing and got three names in less than two years. It was then standardized and got a name that sounded like a skin disease. After three successful releases, the fourth got caught up in development hell for almost 8 years. Fingers got pointed around. By the sheer success of a single feature (AJAX), the community got its act back together and development was resumed. Version 4 was scrapped, and a minor revision, known by everyone as version 3.1, got renamed to version 5. Version 6 spent many years in development (again), but this time the committee succeeded, but nonetheless decided to change the name again, this time to 2015. This revision was big and took a lot of time to get implemented, but finally, new air was breathed into JavaScript. The community is as active as ever. Node, V8, and other projects have brought JavaScript to places it was never thought for. WebAssembly is about to take it even further. And the active proposals in different stages are all making JavaScript's future as bright as ever. It's been a long road, full of bumps, and JavaScript is still one of the most successful languages ever. That's a testament in itself. Always bet on JavaScript.

JavaScript is still one of the most successful languages ever. Tweet this", "image" : "https://cdn.auth0.com/blog/es6rundown/logo.png", "date" : "January 16, 2017" } , { "title" : "Building An Instagram Clone With GraphQL and Auth0", "description" : "Learn how authentication and authorization works with GraphQL and Auth0 by building an Instagram clone.", "author_name" : "Nilan Marktanner", "author_avatar" : "https://cdn.auth0.com/blog/graphworker/nilan.png", "author_url" : "https://twitter.com/_marktani", "tags" : "frameworks", "url" : "/building-an-instagram-clone-with-graphql-and-auth0/", "keyword" : "Introduction to GraphQL: GraphQL is a query language for APIs created by Facebook that offers declarative data fetching in the client and is already used by companies such as Coursera and GitHub. A GraphQL server exposes a schema that describes its API, including queries (to fetch data) and mutations (to modify data). This allows clients to specify their data requirements with queries and send them to one GraphQL endpoint,
instead of collecting information from multiple endpoints, as is typical with REST. While queries are a very easy and quick way for clients to get exactly the data they need in one request, the GraphQL server has to parse and validate the query, check which fields are included, and return the underlying data from the database. The type-safe schema unlocks new possibilities for tooling, as demonstrated by GraphiQL, which is maintained by Facebook. With features like auto-completion (as shown in the GIF) and included documentation, it offers a great developer experience.

Let's learn more about GraphQL queries and mutations by building an Instagram clone.

Building an Instagram clone: user authentication with Auth0 for React and Apollo. We want to build an application that displays a feed of posts with an appropriate image and description. Everyone should be able to see these posts, but to prevent spam we only allow registered users to create new ones. We also send occasional email updates to subscribed users. A simple type schema for our application might look like this:

```graphql
type User {
  id: String!
  email: String
  emailSubscription: Boolean
  name: String
  posts: [Post]
}

type Post {
  id: String!
  description: String
  imageUrl: String
  author: User
}
```

We have a User object type that consists of the fields emailSubscription of type Boolean, email and name of type String, and posts of type Post (denoted by [Post]). The Post object type consists of a description and an imageUrl field of type String, and an author field of type User. Additionally, both object types have the id field, which is a required String (denoted by String!).

These types are then used by our GraphQL server to expose different queries and mutations in its GraphQL schema. Typically, you will see queries to fetch a specific node (single data item) and queries to fetch multiple or even all nodes of a certain type. For mutations, there are usually those for creating, updating, and deleting a node of a certain type. In our case, one available query is allPosts, which we already saw above:

```graphql
query {
  allPosts {
    description
    imageUrl
  }
}
```

When we send this query as an HTTP request to our GraphQL server, we get a JSON response with the same structure as the query:

```json
{
  "data": {
    "allPosts": [
      {
        "description": "#auth0",
        "imageUrl": "https://styleguide.auth0.com/lib/logos/img/logo-blue.png"
      },
      {
        "description": "#graphql",
        "imageUrl": "https://raw.githubusercontent.com/facebook/graphql/master/resources/graphql%20logo"
      }
    ]
  }
}
```

In our frontend application, we can then use the allPosts array in the response to display the posts. With GraphQL queries, we can choose exactly what information we are interested in. For example, if we also want to display the author id and name for every post, we can simply include it in the query:

```graphql
query {
  allPosts {
    description
    imageUrl
    author {
      id
      name
    }
  }
}
```

which will be reflected in the response:

```json
{
  "data": {
    "allPosts": [
      {
        "description": "...",
        "imageUrl": "...",
        "author": {
          "id": "nilan-id",
          "name": "nilan"
        }
      }
    ]
  }
}
```

As GraphQL queries are hierarchical, we simply include the author object with the desired fields in our query. Note that we changed the query without even touching our GraphQL server.

As for creating new posts, we can use the createPost mutation exposed by our GraphQL server. Here, we have to supply parameters that describe our new post. If the user with id nilan-id wants to create a new post, we could send this mutation:

```graphql
mutation {
  createPost(
    description: "found this #auth0 badge",
    imageUrl: "...com/lib/logos/img/badge",
    authorId: "nilan-id"
  ) {
    id
  }
}
```

Mutations have responses too. In this case, we get the id of the new post in return:

```json
{
  "data": {
    "createPost": {
      "id": "another-id"
    }
  }
}
```

Now that we saw queries and mutations in action, we can think about authentication and authorization.

Authenticating GraphQL requests: the authentication workflow we are focusing on goes like this. A user signs in with Auth0 Lock, obtaining a signed JSON Web Token that contains the Auth0 user id associated with that user. Via the Authorization header of the user query, the token is sent to the server, which validates if the token is correctly signed and, if so, also checks if the embedded Auth0 user id refers to a user already registered at the server:

```graphql
query {
  user {
    name
  }
}
```

For a valid token that contains the Auth0 user id of a user already registered at the GraphQL server, the name will be returned.
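As a concrete illustration of this workflow, sending the user query with the token in the Authorization header might look like the following in plain JavaScript. This is a minimal sketch of our own: the `buildGraphQLRequest` helper name, the endpoint URL, and the token value are all placeholders, not part of Auth0's or any GraphQL library's API.

```javascript
// Build the options object for a GraphQL HTTP request that carries
// a JWT in the Authorization header. All names here are illustrative.
function buildGraphQLRequest(query, token) {
  return {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // The server reads the JWT from this header to identify the user.
      'Authorization': 'Bearer ' + token
    },
    body: JSON.stringify({ query })
  };
}

// Usage with fetch (the endpoint is hypothetical):
// fetch('https://example.com/graphql',
//       buildGraphQLRequest('query { user { name } }', jwt))
//   .then(res => res.json())
//   .then(({ data }) => console.log(data.user));
```

The only contract that matters for the workflow described above is that the same header is sent on every request after login, so the server can keep associating requests with the embedded Auth0 user id.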
```json
{
  "data": {
    "user": {
      "name": "nilan"
    }
  }
}
```

They are then logged into the application and can create new posts. For invalid tokens, or valid tokens that contain the Auth0 user id of a user not yet registered at the GraphQL server, the response will be null:

```json
{
  "data": {
    "user": null
  }
}
```

New users will then have to finish the sign-up process on a separate page. Once the user enters their information and hits the sign-up button, we can use the createUser mutation to register a new user at the GraphQL server:

```graphql
mutation {
  createUser(
    email: "newuser@email.com",
    emailSubscription: true,
    name: "new user",
    token: "<jwt>"
  ) {
    name
  }
}
```

and obtain its name:

```json
{
  "data": {
    "createUser": {
      "name": "new user"
    }
  }
}
```

Note that we also pass in the JWT obtained from Auth0 Lock to associate the new user with the Auth0 user id embedded in the token. If another token that includes the same Auth0 user id is supplied in subsequent requests to the GraphQL server, the request can be associated with the according user. We can simply continue to include the JWT in the Authorization header after logging in. If we use the user query now:

```graphql
query {
  user {
    name
  }
}
```

we will obtain a valid response instead of null:

```json
{
  "data": {
    "user": {
      "name": "new user"
    }
  }
}
```

We can use the user query in the frontend application to show buttons for logout and for creating new posts. An authenticated user can then create a new post by specifying a description and a URL for the image, which will be used for the createPost mutation that we saw above.

Authorization for GraphQL: one last thing that is missing is disallowing users that are not authenticated to create new posts. We can partly control that by hiding functionality in the frontend; however, it is really the responsibility of the GraphQL server to make sure that no unauthorized posts are created. We can realize this on the server by defining a set of permission rules. These are the permissions we need for our application:

- Everyone can query posts
- Everyone can query users
- Everyone can create users
- Authenticated users can create posts
- Admins can delete posts
- Admins can delete users
- Operations not listed are not allowed

Then the GraphQL server can determine whether an incoming request is authorized by checking the permissions.
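A rule set like the one above can be represented as a simple lookup table on the server. The sketch below is our own illustration (not Graphcool's or Auth0's actual implementation), assuming each incoming request carries an operation name and a role derived from the validated token:

```javascript
// Hypothetical permission table derived from the rules above.
// Roles are ordered: everyone < authenticated < admin.
const PERMISSIONS = {
  allPosts:   'everyone',
  user:       'everyone',
  createUser: 'everyone',
  createPost: 'authenticated',
  deletePost: 'admin',
  deleteUser: 'admin'
};

const ROLE_RANK = { everyone: 0, authenticated: 1, admin: 2 };

// Returns true if a request with the given role may perform the
// operation. Operations not listed in the table are not allowed,
// matching the last rule above.
function isAuthorized(operation, role) {
  const required = PERMISSIONS[operation];
  if (required === undefined) return false;
  return ROLE_RANK[role] >= ROLE_RANK[required];
}
```

For example, `isAuthorized('createPost', 'everyone')` is false while `isAuthorized('createPost', 'authenticated')` is true, which is exactly the behavior the deletePost example in the text describes for admins.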
If the incoming request contains the deletePost mutation, the server would reject the deletion of the post as long as the request wasn't made by an admin. On the other hand, if the request is authenticated and contains the createPost mutation, the server would grant the request permission, due to the 'authenticated users can create posts' rule.

Conclusion: that's it! In this article we learned the basics of GraphQL by building an Instagram clone. We saw how to authenticate GraphQL requests using Auth0 and combine that with permission rules on the GraphQL server. To see how the application looks, you can play around with the hosted version of our Instagram clone. To set up a GraphQL backend in less than 5 minutes, check out Graphcool. Auth0 integration comes out of the box and works nicely together with the advanced permission system.", "image" : "https://cdn.auth0.com/blog/graphcool/logo.png", "date" : "January 12, 2017" } , { "title" : "Alternatives to Native Mobile App Development", "description" : "A look at five frameworks for building cross-platform mobile applications and how they stack up against each other.", "author_name" : "Ado Kukic", "author_avatar" : "https://s.gravatar.com/avatar/99c4080f412ccf46b9b564db7f482907?s=200", "author_url" : "https://twitter.com/kukicado", "tags" : "frameworks", "url" : "/alternatives-to-native-mobile-app-development/", "keyword" : "TL;DR: Mobile apps are here to stay. For a long time, mobile app development required extensive Objective-C or Java knowledge. Hybrid frameworks and transpilers existed, but paled in comparison to what could be accomplished building apps natively. In recent years, these frameworks have started catching up in features, functionality, and performance. Today, we'll take a look at five promising frameworks for building cross-platform mobile applications.

Two mobile platforms dominate the landscape: Apple's iOS and Google's Android. Combined, these two platforms make up 99% of all mobile devices. Between
the two platforms, over 4.2 million mobile apps have been released, in categories such as gaming, education, business, music, and more. Android is based on Java, while iOS runs on Objective-C and Swift: two fundamentally different frameworks for developers to target. Companies wishing to develop mobile apps had to have two teams, one dedicated to iOS development and the other to Android. Hybrid or cross-platform frameworks and transpilers have gained popularity, as they allow developers to target multiple platforms with a single code base, reducing cost and development time. Today we will take a look at alternatives to building native mobile applications. We will look at various frameworks and approaches to bringing your app to the small screen, and the pros and cons of each. Without further ado, let's jump right in.

Ionic: Ionic is perhaps the most widely known cross-platform mobile framework. It allows developers to build iOS and Android applications with web technologies such as HTML, CSS, and JavaScript. Ionic is built on top of Cordova, which enables access to various device features such as geolocation, push notifications, camera, and others. Ionic 1.x utilizes Angular 1.x, while Ionic 2, the latest version of the framework, utilizes Angular 2+. In addition to the framework, Ionic boasts an entire ecosystem to get developers up and running as quickly as possible. Ionic Cloud gives developers various tools to manage, deploy, and scale their Ionic applications. Ionic Creator is a visual editor that allows developers to rapidly prototype and build mobile applications via drag and drop. Finally, Ionic View is a free iOS and Android app that allows developers to easily share their Ionic app with users, testers, and clients without having to deploy the application to any app store. Developers simply invite users via the Ionic View app, and once an invite has been accepted, the user can download and run the developer's app inside of Ionic View as if the app was installed on their phone.

Ionic enables the development of mobile apps built with web technologies like HTML, CSS, and JS. Tweet this

Pros:
- Build mobile apps with familiar web technologies such as HTML, CSS, and JavaScript
- Ionic View allows you to share your Ionic app without requiring a user to download it
- Target iOS and Android devices with a single code base

Cons:
- Ionic apps use WebView, which means the app is, for all intents and purposes, a web application, so performance can be slow compared to native applications
- Ionic requires deep knowledge of Angular to get the most out of the framework
- Not suitable for complex mobile applications such as games or graphics-intensive programs

PhoneGap / Cordova: PhoneGap is very similar to Ionic in many respects. It too allows developers to build cross-platform mobile applications with web technologies and is built on top of Cordova. PhoneGap is not tied to any specific JavaScript framework, so developers have more choice in how they build their applications. PhoneGap boasts an ecosystem comprised of a desktop app, a mobile app, and a cloud service called PhoneGap Build for building and deploying an application. There is often confusion in the developer community regarding PhoneGap and Cordova. PhoneGap was originally founded by Nitobi. In 2011, Adobe acquired Nitobi and the PhoneGap brand. Adobe then donated a version of PhoneGap, renamed Cordova, to the Apache Foundation, but kept the PhoneGap brand and product. Cordova can be seen as the engine that powers PhoneGap, amongst other hybrid frameworks. PhoneGap adds additional features and functionality on top of Cordova.

Cordova allows you to build cross-platform mobile apps with web technologies of your choice. Tweet this

Pros:
- Build cross-platform mobile apps with web technologies of your choice
- PhoneGap Build allows you to compile your PhoneGap apps into iOS and Android apps without having to install any additional SDKs
- Extensive third-party plugin library offering integrations such as mobile payments and testing frameworks

Cons:
- PhoneGap, like Ionic, uses WebView, which results in performance challenges
- Lack of a standard UI library

Xamarin: Xamarin comes from Microsoft and takes a unique approach to cross-platform app development. Xamarin applications are written entirely in C#. Xamarin then compiles the C# code into native iOS and Android distributions. The underlying layer on which Xamarin is built is Mono, and this enables cross-platform development. The benefit of building applications with Xamarin compared to Cordova-based frameworks is that apps built with Xamarin make use of each platform's native APIs. This means that Xamarin apps compile down to native iOS and Android applications and behave as such. Xamarin is not a 'code once, run everywhere' solution. While you can achieve a high level of code shareability, you will more than likely need to write specific code for the iOS and Android versions of your app. With Xamarin, you will not be able to use native open-source libraries that are available for iOS and Android, but you can make use of many .NET libraries. Getting access to the latest native APIs can be slow, since the Xamarin developers will have to implement them into the framework after they are released.

Xamarin allows you to build cross-platform iOS and Android applications in C#. Tweet this

Pros:
- Developers already familiar with the Microsoft ecosystem will feel right at home with Xamarin and its use of C#
- Xamarin apps have access to all of the native capabilities of both iOS and Android
- Performance of Xamarin apps is comparable to that of natively written applications

Cons:
- Although you can achieve code shareability, you will occasionally need to write platform-specific code
- You will need to understand the iOS and Android APIs to be able to get the most out of the platform
- The licensing model can be difficult to navigate, with certain features locked behind professional and enterprise licenses

React Native: React Native comes to us from Facebook and presents a framework for building cross-platform mobile applications with React. React Native is comparable to Xamarin, wherein apps created with React Native are indistinguishable from native iOS and Android apps written in Objective-C or Java. React Native combines the easy-to-learn syntax of React, but also enables developers to write Objective-C, Swift, or Java when needed for additional performance or tuning. This means that developers can use existing native libraries in their React Native apps. React Native also comes with many UI components, such as buttons, sliders, and modals, that allow developers to get up and running quickly.

React Native allows developers to build native iOS and Android apps with React and JavaScript. Tweet this

Pros:
- Since React Native apps run native APIs, the performance is comparable to true native apps
- You can use native libraries and write Objective-C, Swift, or Java if needed to further optimize performance
- The standard UI component library is extensive and provides many features out of the box

Cons:
- Requires extensive knowledge of React
- Depending on the use case, you may end up writing a lot of native code and then plugging it into React Native, which means you'll need Objective-C or Java knowledge
- While React and React Native are open source projects, Facebook has faced criticism over its BSD+Patents licensing model

Progressive Web Apps: progressive web apps aim to make web applications behave like their native counterparts. This project comes to us from Google and presents a very interesting proposition. Progressive web apps aim to be reliable, fast, and engaging. This means that apps should load fast, present an engaging and fluid user experience, and support native features like push notifications or offline access. The PWA spec will add new features and functionality over time. Developers can then choose how many features they wish to implement, possibly making PWA the most flexible way to reach mobile users. Progressive web apps are unique for two major reasons. While they can be 'installed' on a user's homescreen, they are not delivered through the App Store or Google Play. Instead, when a user visits a PWA, they are presented with an option to add it to their homescreen. This is interesting because it gives the developer the power to deliver and update their applications without forcing the user to do anything. In addition, progressive web apps can be scraped and indexed by search engines. This significantly increases discoverability and opens doors for deeper integrations in the future.

Progressive web apps allow developers to add mobile features to existing web applications. Tweet this

Pros:
- No need for a separate code base: your web application is your mobile application
- App will be indexed and discoverable through search engines
- App does not need to go through the App Store or Google Play to be installed on a user's mobile device

Cons:
- Limited support for PWA on iOS
- Lack of access to many native APIs
- App won't be accessible through the App Store or Google Play

Authentication with hybrid app frameworks: mobile applications present various user and identity challenges. Luckily, Auth0 has your back. Our identity solution is platform agnostic, and we have plenty of resources to get you up and running as quickly as possible. Sign up for a free Auth0 account, and then follow any of these guides to get user authentication for your app in no time at all:

- Ionic - Quickstart (Ionic), Quickstart (Ionic 2), Tutorial
- PhoneGap - Quickstart
- Xamarin - Quickstart, Tutorial
- React Native - Quickstart (iOS, Android)
- Progressive Web Apps - Tutorial

Conclusion: mobile application development is more accessible than ever. Whether you are a full-stack developer, an aspiring engineer, or have decades of experience in the Microsoft ecosystem, you can build great mobile applications that can run on billions of devices today. There may not be a clear winner; each platform has pros and cons, but the important thing is that you have a plethora of options. As the old adage goes: use the right tool for the job. Hopefully you've learned more about native app alternatives and can make an informed decision about developing mobile applications.", "image" : "https://cdn.auth0.com/blog/alternatives-to-native-mobile-development/logo.png", "date" : "January 10, 2017" } , { "title" : "Risks Posed By Legacy Authentication
Webinar", "description" : "Username and password are not enough. So why do most companies still use them?", "author_name" : "Martin Gontovnikas", "author_avatar" : "https://www.gravatar.com/avatar/df6c864847fba9687d962cb80b482764??s=60", "author_url" : "https://twitter.com/mgonto", "tags" : "legacy-authentication", "url" : "/risks-posed-by-legacy-auth-webinar/", "keyword" : "theres a technology basics white paper first published in 2002 entitled “username and passworda dying security model” this paper documents the high risk level associated with legacy authenticationusing username and passwordand predicted that secure access methods using biometricssingle sign onssoone time password tokens and multi-factor authenticationwould quickly replace the use of traditional passwordsfast forward to a decade and a half later and the majority of companies are still using username and password authenticationwe all agree that legacy authentication is not good enoughso why do most companies still use themfor manythe path to modern authentication seems difficult and expensivewhile others worry about the impact on user experiencejoin me and garrett bekkerprincipal security analyst of 451 researchfor a webinar where we will explore this paradox and discuss practices for making securemodern authentication fast and easy for developers and simple and frictionless for usersthe webinar will be live online jan 18 900 am pst or after on demandregister now", "image" : "https://cdn.auth0.com/blog/legacy-auth-webinar/logo.png", "date" : "January 06, 2017" } , { "title" : "Streamlining a search experience with ASP.NET Core and Azure Search", "description" : "Harness the potential of Azure Search's scalable and powerful engine through ASP.NET Core.", "author_name" : "Matías Quaranta", "author_avatar" : "https://s.gravatar.com/avatar/7752008352217db815996ab04aec46e6?s=80", "author_url" : "http://twitter.com/ealsur", "tags" : "azure", "url" : "/azure-search-with-aspnetcore/", "keyword" : "tldrin 
this articlewell delve into azures search-as-a-service solutionunderstand its core features and benefitsand finallyintegrate it with auth0 and azure documentdb on adatabase implementationa full working application sample is available as a github repositorythe quest for searchwhether you are a start-up with a great app or a company with established products on the marketyou will face the complexity of providing a search experience for your usersthere are plenty of options available that require expensive infrastructurecontinuous maintenancea lengthy ramp up process to achieve a working and efficient solutionand a dedicated team to keep it working afterwardbut what if you could achieve the same or even better results in a matter of minuteswith zero maintenance and much lower costswhat if i told you there is a solution that will let you to stop wasting time on maintaining infrastructure and focus on what really matterscreating and enhancing the best possible products for your clientsenter azure searchazure search is a managed cloud search-as-a-service engine that fits your businessbudget and can scale easily as your data growswith just a few clicksits a service that provides a full-text search experience in more than 70 languageswith features such as faceting and filteringstemmingreal-time geolocationand auto-suggestion support with a latency in the order of milliseconds---even when dealing with millions and millions of recordsif you add complete reporting support on microsoft powerbicustomizable business logicand phonetic analysisall without ever needing to worry about infrastructure maintenance or platform updatess a no-brainerazure searchs engine is not only fast---it will enable you to get things done faster and save you countless of implementation hours in the processyou can have a working production-proof scenario in a matter of minutesdata inresults outazure search stores data in indexes and performs searches on themmuch like your beloved sql indexesthey are 
meant to store key information for your search logiceach index contains fields and each field has a typeaccording to the entity data modeland a set of attributessupported types are edmstringedmbooleanint32int64doubledatetimeoffsetgeographypointand collectionavailable attributes applicable to fields areretrievablecan be retrieved among the search resultssearchablethe field is indexed and analyzed and can be used for full-text searchfilterablethe field can be used to apply filters or be used on scoring functionsnext sectionsortablethe field can be used to sort resultssorting results overrides the scoring order that azure search providesfacetablethe field values can be used to calculate facets and possibly used for filtering afterwardkeythe primary unique key of the documenta simple and visual representation of these types and attributes is visible during the azure portal index creation experiencealternativelyyou can use the rest api to achieve the same resultnow that our index is readywe need to load in datawe have several optionspush datasending your data programmatically to azure searchs indexes can be achieved using the rest api or through thenet sdkthis option provides very low synchronization latency between the contents of your database and the index and lets you upload information regardless of where the data ispull datain this modelazure search is capable of pulling data from a wide variety of data sources includingazure sql databaseazure documentdbazure blob storagesql server on azure vmsand azure table storagethe service will poll the data source through indexers on a configurable interval and use time stamp and soft-delete detection to update or remove documents from the indexindexers can be created using the api or using the portalthey can be run once or assigned a scheduleand they can track changes based on sql integrated change tracking or a high watermark policyan internal mark that tracks last-updated time stampsonce your data is inyou can start
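The field types and attributes listed above can be sketched as an index definition. This is an illustrative payload only: the field names are hypothetical and the exact JSON shape expected by the Azure Search REST API may differ between API versions, so verify against the official reference before use.

```javascript
// Sketch of an Azure Search index definition using the types and
// attributes described above. Field names ("id", "email", ...) are
// hypothetical; the REST API reference has the authoritative payload.
const userIndex = {
  name: "users",
  fields: [
    { name: "id",       type: "Edm.String",         key: true,        retrievable: true },
    { name: "email",    type: "Edm.String",         searchable: true, retrievable: true },
    { name: "country",  type: "Edm.String",         filterable: true, facetable: true },
    { name: "signedUp", type: "Edm.DateTimeOffset", sortable: true,   filterable: true }
  ]
};

// An index must have exactly one field marked as the document key.
const keyFields = userIndex.fields.filter(f => f.key === true);
console.log(keyFields.length); // 1
```

The attribute flags map directly onto the portal checkboxes described in the text: a field you never filter on can stay search-only, which keeps the index smaller.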
by doing some searchesyou can do it using thenet sdk or rest api we mentioned beforebut you can also do it directly from inside the azure portal without a single line of code through the search exploreryou can even use any of the query parameters specified on the documentation when you use the explorerby defaultazure search applies the tf-idf algorithm on all attributes marked as searchable and calculates order by the resulting scorewe can customize this behavior withscoring profiles in the next sectionthe search experienceazure search has a very powerful set of features that will empower you to create the ultimate search experienceamong the most used onesfacets and filters let you create drill-down navigation experiences like those provided by the most popular e-commerce sites by providing real-time statistics on result filters and enabling your users to apply them to further narrow their searches- search suggestions that cover auto-complete scenarios from within the search box- advanced querying for complex scenarios by supporting lucene query syntaxincluding fuzzy searchproximity searchterm boostingand regular expressionsas we mentioned earlierresults are treated with the tf-idf algorithm to calculate the result scorebut what if we dont want the default behaviorwhat if our documents have attributes that are more relevant than othersor if we want to provide our users with geo-spatial supportfortunatelywe can do this withscoring profilesa scoring profile is defined bya namefollowing naming rulesa group of one or more searchable fields and a weight for each of themthe weight is just a relative value of relevance among the selected fieldsfor examplein a document that represents a news article with a titlesummaryand bodyi could assign a weight of 1 to the bodya weight of 2 to the summarybecause its twice as importantand a weight of 3.5 to the titleweights can have decimalsoptionallyscoring functions will alter the result of the document score for certain
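The news-article example above (body weight 1, summary weight 2, title weight 3.5) can be written down as a scoring profile. The nested shape here only approximates the REST API's `scoringProfiles` section, and the score combination below is a toy illustration of what the weights mean, not Azure Search's actual TF-IDF computation.

```javascript
// The weighted scoring profile from the text: title matches count
// 3.5x as much as the same match in the body.
const scoringProfile = {
  name: "newsRelevance",
  text: { weights: { body: 1, summary: 2, title: 3.5 } }
};

// Toy illustration only: multiply a per-field base score by its weight
// and sum. (The real engine computes TF-IDF per field first; this just
// shows the relative effect of the weights.)
function weightedScore(fieldScores, weights) {
  return Object.keys(fieldScores)
    .reduce((sum, f) => sum + fieldScores[f] * (weights[f] || 0), 0);
}

console.log(weightedScore({ title: 1, body: 0, summary: 0 }, scoringProfile.text.weights)); // 3.5
console.log(weightedScore({ title: 0, body: 1, summary: 0 }, scoringProfile.text.weights)); // 1
```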
scenariosavailable scoring functions arefreshnessfor boosting documents that are older or neweron an edmdatetimeoffset fieldraising the score of the current months news above the restmagnitudefor boosting documents based on numeric edm field valuesmostly used to boost items given their pricecheaper items are ranked higheror number of downloadsbut can be applied to any logic you can think ofdistancefor boosting documents based on their locationgeographypoint fieldsthe most common scenario is theshow the results closer to mefeature on search appstagused for tag boosting scenariosif we know our userswe can tag them with the product categories they like mostand when they searchwe can boost the results that match those categoriesproviding a personalized result for each userscoring profiles can be created through the api or on the portalthe big pictureafter creating our service and consuming it for some timewe may be wonderingcan i see how frequently the service is being usedwhat are the most common queriesare users searching for something i cant provide answers forwe only need to have an azure storage account on the same region and subscription as our azure search service and use the azure portal to configure itafterwardwe can either download the data or consume it with another servicesuch as microsoft powerbiwith a content packmixing it all togethertools of the tradeif you followed our previous postif you didnti recommend you doyou already integrated auth0 with azure documentdb as adatabase provider to store your userssince we will be working on aspnet coreyou can obtain the runtime and client tools here for any platformeverything i mention in this article will be open-source and cross-platformand at the endall the code will be available in the github repositoryll start with a base template by running dotnet new -t web on our command linethis will create a basic aspnet core web app on our current folderanother alternative is to use the widely known yeomans aspnet generatorto
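The tag-boosting scenario described above can be sketched as a scoring function attached to a profile. The property names here (`tagsParameter`, `boost`, `fieldName`) approximate the REST API's scoring-function shape and should be checked against the current API reference; the field name "categories" is hypothetical.

```javascript
// Sketch of the "tag" scoring function described above: boost results
// whose "categories" field matches tags supplied at query time.
const tagBoostProfile = {
  name: "personalizedResults",
  functions: [
    {
      type: "tag",
      fieldName: "categories",
      boost: 2,                          // matching documents score higher
      tag: { tagsParameter: "userTags" } // filled per query with the user's tags
    }
  ]
};

// At query time the caller would pass something like
// scoringProfile=personalizedResults&scoringParameter=userTags:books,music
console.log(tagBoostProfile.functions[0].type); // "tag"
```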
install yeomanyou need an environment that has npmnodejs package managerwhich comes with the nodejs runtimeonce that npm is availableinstalling yeoman is as simple asnpm install -g yoand installing aspnet generator withnpm install --global generator-aspnetonce the generator is installedwe can create our basic app by runningyo aspnetand picking web application basicthis creates a simple aspnet core mvc web application that you can try by running dotnet restore and dotnet run on the created folderyou can also follow the next steps with a preexisting aspnet core applicationcontinuing after this groundworkwe will create a personalized auth0 sign-up pagestore our usersinformation on documentdbleverage azure searchs indexers to index all this dataandfinally create a search experience on aspnet core for maximum performanceourlockyou will initially need your auth0 clientidsecretand domainwhich you can obtain from your dashboardauthentication will be handled by openid connectso we will need to configure it firstwe need asps openid connect packageso well add that to our dependenciesdependencies{microsoftaspnetcoreauthenticationopenidconnect10}after thatwe need to configure and include the service on our asps pipeline on our startupcs file using the domainclientidand secret that we obtained from the dashboardpublic void configureservicesiservicecollection services{ servicesaddauthenticationoptions =>optionssigninscheme = cookieauthenticationdefaultsauthenticationschemeservicesconfigure<auth0settings>configurationgetsectionauth0// configure oidc servicesopenidconnectoptions>{ // specify authentication scheme optionsauthenticationscheme =// set the authority to your auth0 domain optionsauthority = $https//{configuration[domain]}// configure the auth0 client id and client secret optionsclientid = configuration[]clientsecret = configuration[clientsecret// do not automatically authenticate and challenge optionsautomaticauthenticate = falseautomaticchallenge = false// set response 
type to code optionsresponsetype =code// set the callback pathso auth0 will call back to http//localhost5000/signin-auth0 // also ensure that you have added the url as an allowed callback url in your auth0 dashboard optionscallbackpath = new pathstring/signin-auth0// configure the claims issuer to be auth0 optionsclaimsissuer =//other things like mvc}public void configureiapplicationbuilder appihostingenvironment envioptions<oidcoptions{ appusecookieauthenticationnew cookieauthenticationoptions { automaticauthenticate = trueautomaticchallenge = true }// add the oidc middleware appuseopenidconnectauthenticationvalue}we can store these settings on an appsettingsjson file for programmatic accesslets start customizing our usersprofiles by creating asign-up experience using auth0s lockwe can achieve this by creating an mvc accountcontroller and a login viewwhich will hold the locks codeand use an extension to create the openid connect context informationthe syntax is pretty clearonce we add the lock javascript library we can proceed to initialize it using the additionalsignupfields attributewhich is an array of objects that describe new data fields for our users to fill during sign-upadditionalsignupfields[{ nameaddressplaceholderenter your addressicon/images/locationpngprefillstreet 123validatorfunction{ // only accept addresses with more than 10 chars return valuelength >10} }{ typeselectnamecountrychoose your location[ {valueuslabelunited states{valuefrfrancearargentina} ]/images/country}]this example will prompt for two extra fieldsone a text valuethe other a restricted option on a selectorour lockwith some other extra fieldswill end up looking like thisall these extra fields get stored on our azure documentdb database inside the user_metadata attribute as part of the json user documentindexing usersif you recall one of the features we mentioned earlierazure search is capable of pulling data with indexers from azure documentdb databases automaticallywe can start by 
creating an azure search accountthe service includes a free tier that has all the features of the paid ones with some capacity restrictions000 documentswhich are enough for tests and proofs of conceptonce our account is createdwe will need to set up the import pipeline by selecting import datanextll search for our azure documentdb database among the available sourcesafter selecting our databasewe can customize the query that obtains our documentsso we will flatten the data generated by auth0 by configuring this querykeep in mind that the user_metadata attribute will hold your ownfieldsin our case the addressgenderand descriptionso edit this query accordinglyonce the source is setazure search probes the database for one document and provides us with a suggested index structurewe will mark each fields attributes depending on the search experience we want to providedata that comes from closed value lists are good filterable/facetable candidates while open text data is probably best suited for searchableadditionallywe will create a suggester that will use our usersemail to provide an auto-complete experience later onafter configuring the index structurewe are left with just the pulling schedule that will define how often our indexer will look for new information in our databasethis includes automatic change tracking anddeletions tracking by a configurable soft-delete attributethe indexer will run and detect new documentswe can always keep track of every run through the portalfinallyyou will need to write down your access keys so you can use them on the next sectioncreating our uxwith our index ready and our lock configuredwe need to add azure searchs nuget package to our project by adding the dependencyazuresearch3we will use asps dependency injection to create a singleton service which will act as wrapper over azure searchthe services full code can be viewed on githubit is createdso you can reuse it on your own projects outside of this article and as a stepping 
stonethe key part of that service is the in-memory cache of isearchindexclientseach client lets you connect to one index andinternallyit works mostly like an httpclientbased on the most common error with httpclients in our best interest to reuse each isearchindexclient to avoid socket exhaustion with a concurrentdictionarysince our service is injected as a singletonprivate searchserviceclient client//maintaining a dictionary of index clients is better-performantprivate concurrentdictionary<isearchindexclient>indexclientspublic searchservicestring accountnamestring querykey{ client = new searchserviceclientaccountnamenew searchcredentialsquerykeyindexclients = new concurrentdictionary<}/// <summary>/// obtains a new indexclient and avoids socket exhaustion by reusing previous clients/// </summary>param name=indexname></param>returns>/returns>private isearchindexclient getclientstring indexname{ return indexclientsgetoraddclientindexesgetclient}finallyll register our service on our startupcs as a singleton by providing our account name and key we obtained from the portal{ //oidc configuration//injecting azure search service servicesaddsingleton<isearchservice>new searchserviceconfiguration[//other things like mvc}this will enable you to inject the service on any controllerprivate isearchservice _searchservicepublic searchcontrollerisearchservice searchservice{ _searchservice = searchservice}using this viewmodel to support client-to-server communicationspublic class searchpayload{ public int page { getset}=1public int pagesize { get} = 10public bool includefacets { get} = falsepublic string text { get} public dictionary<string>filters { get} = new dictionary<publicfacets { get} = newpublic string orderby { get} =public string querytype { getsimplepublic searchmode searchmode { get} = searchmodeanypublic string scoringprofile { get} }once the wiring is dones just a matter of creating interfacesyou can use any client framework of your choice to do sousing angularjs for 
examplewe can create a ui that provides for a faceted/filterable search experienceand even an auto-complete experience using the suggester we created previouslycode samples for each experience are available at the repositoryconclusionazure search is a scalable and powerful search engine that takes the infrastructure problem out of our hands and provides us with an easy-to-use api and visual tooling in the azure portalonce againwe can see how great services and technologies can be integrated to achieve a better user experienceazure search adds an almost-limitless search feature on top of auth0 and azure documentdb thatpaired with aspyields a cross-platform and efficient solution", "image" : "https://cdn.auth0.com/blog/azure-search/logo.png", "date" : "January 05, 2017" } , { "title" : "Managing authentication in your Ruby on Rails 5 app with Auth0", "description" : "Learn how to create an application in Rails 5 with Auth0.", "author_name" : "Amin Shah Gilani", "author_avatar" : "https://secure.gravatar.com/avatar/e97345f1125996ea6e1a8394fd45da28", "author_url" : "https://amin.gilani.me", "tags" : "ruby-on-rails", "url" : "/rails-5-with-auth0/", "keyword" : "rails 5 is out with action cablea brand new api modeand best of allrake tasks inside railsthe existing quickstart at auth0 aims to get you up and running really fastbut in this tutorialwell create a new application that compartmentalizes your code appropriatelydoes everything in the rails waythis will lead to a stronger base on which to grow your applicationas an added bonusthis application will be compatible with pundit right out of the boxsetting up an auth0 powered rails apptheres already an auth0 tutorial on making a ruby on rails appbut it skips over a few best practices to keep things simpleill walk you through a more powerful initial setupgenerating a rails appif youre working with railsyou already know thisbut i like to keep things completere also going to be using postgresql as our databaseeven in 
developmentits good practice to reflect your production environment as closely as possible in developmentand databases can be particularly tricky since some migrations that work withsaysqlite wont work with postgresql$ rails new auth0_setup --database=postgresqlsetting up gemsomniauth is a flexible authentication system that standardizes authentication over several providers throughstrategiesauth0 already has an omniauth strategy designed for drop in useadhering to best practicesre going to be storing secrets in environment variables instead of checking them into our codeto make it easier to setup environment variables in developmentll need the dotenv gemadd the following to your gemfile and run bundle install# standard auth0 requirementsgemomniauth~>13gemomniauth-auth04# secrets should never be stored in codegemdotenv-railsrequiredotenv/rails-nowgroup[developmenttest]setup your environment variablesdotenv will load environment variables stored in theenv fileso you dont want to check that into version controladd the following to yourgitignore and commit it immediately# ignore the environment variablesenvnow we can safely store our secretscreate aand copy your auth0 tokens from the settings page of your clientauth0_client_id= #insert your secret hereauth0_client_secret= #insert your secret hereauth0_domain= #insert your secret heresetup app secretsinstead of referring to the secrets directly in your codefetch them once in the secrets filewhere they should beand refer them via this file throughout your codemake the following changes to your config/secretsyml# add this to the top of the filedefault&default auth0_client_id<%= env[auth0_client_id] %>auth0_client_secretauth0_domain# make the rest of your groups inherit from defaultdevelopment*defaulttestproductioncreate an initializerinitializers are loaded before the application is executedlets configure omniauths auth0 strategy and add it to the middleware stackcreate config/initializers/auth0rb to configure omniauth# 
configure the middlewarerailsapplicationconfigmiddlewareuse omniauthbuilder do providerauth0railssecretscallback_path/auth/auth0/callbackendcreating pagesafter authenticating the userauth0 will redirect to your app and tell you if the authentication was successfulwe need two callback urlsone for auth0s response after an authorization request and one for us to redirect to and handle failurell talk more about the second one laterfor now lets name them callbackand failure respectivelythey dont need any htmlcssor javascript associated with themwe also want two pages for our simplistic appa publicly accessible home pageand a privately accessible dashboardthese will be in their own controllersrails g controller publicpages home &rails g controller dashboard show &rails g controller auth0 callback failure --skip-template-engine --skip-assetstroubleshootif you get errors running your app at this pointyou should probably set up your database with rails dbsetup &rails dbmigratenow lets wire up the routes to our controllers and actionsmake the following changes to config/routesrb# home pagerootpublic_pages#home# dashboardgetdashboard=>dashboard#show# auth0 routes for authenticationgetauth0#callbackget/auth/failureauth0#failuresetup the auth0 controllerreplace the file in /app/controllers/auth0_controllerrb withclass auth0controller <applicationcontroller # this stores all the user information that came from auth0 # and the idp def callback session[userinfo] = requestenv[auth] # redirect to the url you want after successful auth redirect_to/dashboardend # this handles authentication failures def failure @error_type = requestparams[error_type] @error_msg = requesterror_msg] # todo show a failure page or redirect to an error page endendyou may want to finish the todo above with your ownbehaviorauth0 only allows callbacks to a whitelist of urls for security purposeswe also want a callback for our development environment so specify these callback urls at application
settingshttps//examplecom/auth/auth0/callbackhttp//localhost3000/auth/auth0/callbackreplace httpscom with the url of your actual applicationcreating a login pageauth0 provides a beautiful embedded login form called locks designed to work with auth0 and looks absolutely gorgeousreplace the contents of app/views/public_pages/homehtmlerb<div id="root"style="width320pxmargin40px autopadding10pxborder-styledashedborder-width1pxbox-sizingborder-box">embedded area</div>script src="//cdncom/js/lock/102/lockminjs"/script>script>var lock = new auth0lock'%= railsauth0_client_id %>auth0_domain %>{ containerroot'{ redirecturlresponsetypecode'params{ scopeopenid email'// learn about scopes//auth0com/docs/scopes } } }lockshowan auth0 helpercoming from using devise for authentication in railsi liked the helpers it gave so lets recreate those as closely as possibleadd the following to app/helpers/auth0_helperrbmodule auth0helper private # is the user signed in# @return [boolean] def user_signed_insession[userinfo]presentend # set the @current_user or redirect to public page def authenticate_user# redirect to page that has the login here if user_signed_in@current_user = session[userinfo] else redirect_to login_path end end # whats the current_user# @return [hash] def current_user @current_user end # @return the path to the login page def login_path root_path endendfor this helper to be available throughout your applicationadd this line to your app/controllers/application_controllerall other controllers inherit from application controllerinclude auth0helpershowing user info in the dashboardwe dont really have any content to show in our sample application at this point so lets make our dashboard show the users picture and name upon login# app/controllers/dashboard_controllerrbclass dashboardcontroller <applicationcontroller before_actionauthenticate_userdef show @user = current_user endendand then in our app/views/dashboard/showerbdiv>img class="avatar"src="%= @user[info][image] 
%>/>h2>welcome <name] %>/h2>descriptive errorsremember the failure callbackwhen authentication failsyou want to handle it gracefullyso on unsuccessful authentications make omniauth internally redirect there and pass along an error descriptionadd this to your config/initializers/omniauthrbomniauthon_failure = procnew {envmessage_key = env[errortype] error_description = rackutilsescape]error_messagenew_path =#{env[script_name]}#{omniauthpath_prefix}/failureerror_type=#{message_key}&error_msg=#{error_description}rackresponsenew302 moved302locationnew_pathfinish}overflowing cookies in developmentcookies have a 4kb limitwhich is too small to store our users information inmore details can be found here but to make your app work in developmentadd this to /config/initializers/session_storerbrailssession_storecache_storeadd this to the end of the config block in /config/environments/developmentrb so that it overrides all other instances# enforce this ruleconfigcache_store =memory_storeconclusioncongratulationsyou now have an application thatdoes not store any user information in the databasehandles authentication statelesslystores configuration secrets in environment variablesprovides a devise-like current_userfollows the rails way in everythingif you use pundit for authorization it will work out of the box with your setup since it hooks onto current_user", "image" : "https://cdn.auth0.com/blog/rails-with-auth0/logo.png", "date" : "January 03, 2017" } , { "title" : "2017 Budget Planning for Technology Startups: Authentication is Key", "description" : "We take a look at budget planning for startups and find out why making authentication part of the budget is more important now than ever", "author_name" : "Diego Poza", "author_avatar" : "https://avatars3.githubusercontent.com/u/604869?v=3&s=200", "author_url" : "https://twitter.com/diegopoza", "tags" : "budget", "url" : "/2017-budget-planning/", "keyword" : "2017 is coming and with it so do many tough decisions for startups
and small businesses alikebudget planning is perhaps the hardest of themin this short article we will take a look at what is important for tech startupshow to aim for success in the coming yearand why you should consider authentication an important part of your budgetread onbudget planning is toughand authentication should be a part of ittweet this introductiondecemberthe perfect time to review the failures and successes of last yearstartups fail and succeed based on their visionso sitting back and reflecting on the things that went wrong is key to eventually becoming a successfulself-sustaining companyan important part of this job is finding where exactly it is wise to useor investthe precious money from your investorsand in the world of ever cheapercommoditizedsoftware engineersit is tempting to saylets roll this feature in-housethis might be a great callthink of amazonthey developed a whole class of internal appliances and servicesthey invested millions in the necessary infrastructureandnowin retrospect we can say it was the right callbut what would have happened had amazon made that same decisionto invest millions upon millions on the development of their own infrastructure and softwaretodaywould amazon web services have been a successwould their investment have paid off eventuallyit is hard to saybut it probably would have cost them moreso finding the right idea on which to investor developis not just about the ideabut about the right time to do soit is safe to say there are great alternatives for many things todaybuyor develop them in-housethis is probably the single most difficult call software startups have to make todayan error in this area can make or break a startup or small businessplaying it safetheres no success without some riskso identifying areas where your company can fail is of the utmost importancenot only will this way of thinking show you great growth or investment opportunitiesbut it will also make it very clear which wars are worth
fightingfor amazonyeahdeveloping world-classglobally distributeddatacenters was a war worth fightingthey had the visionthe opportunitythe timefundamentallya war chest big enough to survive failure in case they could not monetize their investment later ona different exampletake a look at state-of-the-art chip companieskeeping up with the pace of development and manufacturing requirements for the products that are coming out to the market is so crushingly exhaustingfrom an economic point of viewthat it makes absolutely no sense to invest in manufacturing facilities unless you can keep the facilities at 100% full capacitythis results in interesting scenarios where competitors in the market are associates in manufacturing competing productsintelprobably the most advanced cpu manufacturer in the worldis the only cpu company that also owns their own manufacturing facilitiescalledfabsin the semiconductor industryamd used to own their own fabsbut it was so difficult to justify the expense of keeping manufacturing in-housethat they had to sell themarmnvidia and apple do not own any fabs eitherwe can spot a certain trend when it comes to technology companiesit is usually the first-to-market company that can reap most of the benefits of a big investmentthis may sound too far removed from startupsbut think of it this waymost of the timea startup is operating so close to running out of moneythatunless that startup is developing a first-to-market productit is usually not worth investing in keeping that capability in-housein the old daysevery company kept their own mail servernow you would be crazy to use something other than one of the corporate email providers such as gmail or outlookthey do what you could do much better and at a fraction of the costthere are specific cases where a company might decide to invest in keeping a critical piece of infrastructure in-housefor examplea government institution might consider keeping an in-house email server a requirement to
avoid denial-of-service attacks in case of conflict with the country that hosts google or microsofts serversthis is not the casehoweverwhy you should not develop authentication in-houseauthentication is essential to all technology startupsunfortunatelyit is a complex subjectnot only is a simple username/password insufficient and inconvenientbut sensitive data can be accessed if your authentication system is compromisedin a senseauthentication and authorization are probably among the most sensitive areas for technology startupsit appears deceptively simple from the outsidebut becomes a thorny subject as you start to learn its intricaciesis your authentication system secured by two-factor authenticationis an sms fallback availableare your users using passwords which have already been leaked from other sourcesare social logins implemented to bring more users to your platformcan your system handle hundreds of thousands of usersare your enterprise accounts supported and linked with your newer servicesthese are all tough questions for every startupbut with auth0 things need not be this waysign up for a free auth0 account and learn why more and more companies are using an external authentication solutionyou can get up and running in a matter of minuteswith support for features such asa visual dashboard for managing all settingstop tier security and availabilityusername/password authenticationsocial loginsenterprise federationpasswordless loginsmultifactor authenticationsingle-sign-onbreached password detectionprogrammable ruleseasy integration for mobile and web appslet us handle authentication for you so you can truly focus on what mattersdelivering your productconsider the case of enterprise federationthe enterprise sales cycle is already too longmeetingscontractsand due diligence slow the sales cyclehaving to build authentication and enterprise federation proof of concepts for various enterprise connections delays the process furthernot being able to support
enterprise federation the way a customer requires can cost you the salehaving enterprise federation should be acheck-the-boxitem rather than a feature to be scrutinizedauth0 features a comprehensive enterprise toolkit that supports features like federation through all major identity providerssingle sign onssoauditinganalyticsand enhanced security features like multifactor authentication and anomaly detectionmany of these features can be enabled and configured with the flip of a switch and a few lines of codehaving a modern authentication platform does not result in increased revenue alonefreeing up your developers to focus on building the unique features of your businesson the other handdoesother stuff to keep in mindunfortunatelybudget planning is not all about what you can do in-house or buy from external providersit also has to do with strategyhere are some things we have found can make a whole world of differencemarketingwe have found marketing is not all about showing your product or letting other people know your product or service existsit is also about creating an audiencerather than develop a product and then try to get people to buy itplan a marketing budget that includes elements that are useful to the audience your product caters toin other wordsconsider reducing the money spentor keeping it stablein promoting a productand increasing the money spent on building a community around your companykeep your employees happythis might seem obviousbut how many times have we seen a key developer leave early on because he or she just wasnt happy with their jobstartups must value their top employeesit is the people that fundamentally trust the company and invest more than just their working hours in it that can make or break a productstartups dont have the time or money to cope with employee turnoverbut dont be naivekeeping employees happy is not all about compensationconsider keeping a part of the budget dedicated to out-of-office activities that employees can
enjoyconsider buying them equipment that they can take homeconsider running polls and asking them what would improve their mood at the officeof coursethis should never cause compensation to be affecteduse data to make choicesthis can be applied to any department in your organizationwhenever you make a choicemake sure you have some way of collecting quantifiable data that can later be analyzed and turned into coldhard facts about whether that choice was rightconsider including a dedicated data team for this purpose in the budgetgive them access to as many choices as possible and listen to their resultscreate a planning teamusing data has to do with learning about the results of a previous choicea planning teamtries to analyze things before making a choicefor instancehave a dedicated team come up with budget and time-to-market figures for the development of in-house solutions before committing to doing themcompare that with the external alternativesin factdo this for the authentication aspect of your companyyou will be surprisedconclusionbudget planning is always a tough time for big and small companiesstartups depend on making the right calls to survive until they can deliver a productkeeping sharp focus on your product lets you use your resources and time in a much more efficient waythere is no point in developing in-house solutions to solved problems unless forced to do sosuccessful companies understand this from the get-go and invest appropriately in the right external services and productsauthentication isone of those servicestough to get right from the ground-upand with great choices in the marketinvest wisely and reap the benefitsthe sky is the limit", "image" : "https://cdn.auth0.com/blog/budget/logo.png", "date" : "December 29, 2016" } , { "title" : "Introduction to Progressive Web Apps (Push Notifications) - Part 3", "description" : "Progressive Web Apps are the future.
Learn how to make your mobile web app native-like by making it work offline, load instantly and send push notifications.", "author_name" : "Prosper Otemuyiwa", "author_avatar" : "https://en.gravatar.com/avatar/1097492785caf9ffeebffeb624202d8f?s=200", "author_url" : "http://twitter.com/unicodeveloper?lang=en", "tags" : "pwa", "url" : "/introduction-to-progressive-web-apps-push-notifications-part-3/", "keyword" : "TL;DR: Web development has evolved significantly over the years, allowing developers to deploy a website or web application and serve millions of people around the globe within minutes. With just a browser, a user can put in a URL and access a web application. With progressive web apps, developers can deliver amazing app-like experiences to users using modern web technologies. In part 1 and part 2 of this tutorial, we set up our progressive web app, cached the pages, and made it work fully offline. This time, we'll add the ability to activate push notifications.

Recap and introduction to part 3: in Introduction to Progressive Web Apps (Offline First), we discussed how a typical progressive web application should look and also introduced the service worker; we also cached the application shell. In Introduction to Progressive Web Apps (Instant Loading), we made the app cache dynamic data and load instantly from locally saved data. This part of the tutorial will cover: activating push notifications in the web app, and adding a web application manifest to make the web app installable.

Push notifications: the Push API gives web applications the ability to receive push notification messages pushed to them from a server. This works hand in hand with the service worker. This is the process of a typical push notification flow in a web application: the web app brings forward a popup asking the user to subscribe to notifications; the user subscribes to receive push notifications; the service worker's push manager is responsible for handling the user's subscription; the user's subscription ID is used whenever messages are posted from 
the serverevery user can actually have a customized experience based on their subscription idwith the help of the push listens and is ready to receive any message coming inimplementationthis is a quick summary of the process of how well set up push notifications in this web appgive the user an option to click on a button to activate or deactivate push notificationsif the user activatessubscribe the user to receive push notifications via the service workers push managerset up an api to handle saving and deleting of users subscription idthis api will also have endpoints that will be responsible for sending notifications to all the users that have activated push notificationsset up github webhook to automate the sending of notifications immediately a new commit is pushed to the resources-i-like repolets buildcreate a new javascript file js/notificationjs in your projectreference the file in your indexhtml like so<script src=/js/notificationjs>/script>and add the following code the notificationjs like sofunctionwindow{use strict//push notification button var fabpushelement = documentqueryselectorfab__pushvar fabpushimgelement = documentfab__image//to check `push notification` is supported or not function ispushsupported{ //to check `push notification` permission is denied by user ifnotificationpermission ===denied{ alertuser has blocked push notificationreturn} //check `push notification` is supported or not ifpushmanagerin windowsorrypush notification isnt supported in your browser} //get `push notification` subscription //if `serviceworker` is registered and ready navigatorserviceworkerreadythenregistration{ registrationgetsubscriptionsubscription{ //if already access grantedenable push button status if{ changepushstatustrue} else { changepushstatusfalse} }catcherror{ consoleerror occurred while enabling push}} // ask user if he/she wants to subscribe to push notifications and then //subscribe and send push notification function subscribepush{ navigator{ if{ 
alertyour browser doesnt support push notificationreturn false} //to subscribe `push notification` from push manager registrationsubscribe{ uservisibleonlytrue //always show notification when received }{ toastsubscribed successfullyconsoleinfopush notification subscribedlog//savesubscriptionidchangepushstatus{ changepushstatuspush notification subscription error} // unsubscribe the user from push notifications function unsubscribepush{ //get `push subscription` registration{ //if no `push subscription`then return if{ alertunable to unregister push notification} //unsubscribe `push notification` subscriptionunsubscribe{ toastunsubscribed successfullypush notification unsubscribed//deletesubscriptionid{ console{ consolefailed to unsubscribe push notification} //to change status function changepushstatusstatus{ fabpushelementdatasetchecked = statusfabpushelementif{ fabpushelementclasslistaddactivefabpushimgelementsrc =/images/push-onpng} else { fabpushelementremove/images/push-off} } //click event for subscribe push fabpushelementaddeventlistenerclick{ var issubscribed =checked ===issubscribed{ unsubscribepush} else { subscribepush} }ispushsupported//check for push notification support}the code above is doing many thingsjust relaxill explain the different parts of the codepush notification buttonthis code simply grabs the push notification activation and deactivation buttonfunction ispushsupported}this code checks the browser to determine whether push notification is supportednowits paramount that the service worker has to be registered and ready before you can even try to subscribe a user to receive push notificationssothe code above also checks if the service worker is ready and gets the subscription of the user//to change status function changepushstatus} }change push status to red when active/subscribedchange push status to ash when inactive/unsubscribedthe changepushstatus function simply changes the color of the button to indicate wether the user has subscribed 
or not// ask user if he/she wants to subscribe to push notifications and then //}this code above is responsible for the pop-up that comes forth asking the user to either allow or block push notifications in the browserif the user allows push notificationsit shows a toast message indicating the approvalthen goes ahead to change the color of the button and save the subscription idif the push manager doesnt exist then it alerts the user that it is not supportednotethe function that saves subscription id has been commented out for nowask the user to allow or block notificationssubscription in the console// unsubscribe the user from push notifications function unsubscribepush}this code is responsible for unsubscribing from push notificationa toast message indicates the unsubscriptionthen goes ahead to change the color of the button and delete the subscription idthe function that deletes subscription id has been commented out for nowunsubscription in the console //click event for subscribe push fabpushelementthis code simply adds a click event to the button to toggle between subscribing and unsubscribing a userhandle subscription idswe have been able to see the push subscription endpointswe need to be able to save the subscription ids of each userwe also need to be able to delete these subscription ids when a user unsubscribes from push notificationsadd this code to your notificationfunction savesubscriptionid{ var subscription_id = subscriptionendpointsplitgcm/send/[1]subscription idsubscription_idfetchhttp//localhost3333/api/users{ methodpostheadersacceptapplication/jsoncontent-typebodyjsonstringify{ user_idsubscription_id }}function deletesubscriptionid3333/api/user/+ subscription_iddelete} }}save and delete subscription idin the code abovewe are extracting the subscription id from the subscription endpoint and posting it to an api servicethe savesubscriptionid creates a new user and saves a subscription idthe deletesubscriptionid deletes a user with its subscription 
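The endpoint-splitting step described above can be isolated into a tiny helper. This is an illustrative sketch (the helper name getSubscriptionId is assumed, not from the tutorial), and it assumes the legacy GCM endpoint format the tutorial relies on:

```javascript
// Illustrative helper (name assumed): pull the subscription id out of a
// legacy GCM push endpoint, mirroring endpoint.split('gcm/send/')[1].
function getSubscriptionId(endpoint) {
  var parts = endpoint.split('gcm/send/');
  return parts.length > 1 ? parts[1] : null; // null when not a GCM endpoint
}
```

Newer browsers may return FCM-style endpoints, so guarding the split (rather than indexing blindly) avoids posting undefined to the API.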
idthis looks weirdwhy post to a remote services simplewe need to have a database of all user subscription ids so that we can send notifications to everyone at onceapi servicethe api service handling the saving and deletion of subscription ids will also handle the actual sending of notificationsthis is a break down of the apiit will have 3 api routes/endpointspost /api/users to create new users and store their subscription idsdelete /api/user/user_id to delete and unsubscribe userspost /api/notify to send notifications to all subscribed userslucky enoughi have coded the api servicemake sure you have node and mongodb installedclone it and run from the terminal with node serverpwa api server running locallymake sure you create aenv file like soenv file for pwa-apinoteyou can through this good tutorial to see how the api service should be set upi simply implemented a nodejs version of the api service in this tutorialwe are using firebase cloud messaging as our messaging servicego ahead and set up a new project with firebaseonce you have done thatgrab the server key from the dashboardproject settings > cloud messaging like socloud messaging on firebase dashboardthe server key should be the value of fcm_api_key in theenv file of the api servicethe server key is needed when posting to firebase cloud messaging via our api servercheck out the notification controller in our api codebaseserver/controllers/notificationservercontrollernotifyusersreqres{ var sender = new gcmsendersecretsfcm// prepare a message to be sent var message = new gcmmessage{ notification{ titlenew commit on github reporiliconic_launcherclick to see the latest commituserfind{}errusers{ // user subscription ids to deliver message to var user_ids = _mapuser_iduser idsuser_ids// actually send the message sendersend{ registrationtokensuser_ids }response{ if} else { return res} }go back to notificationjs and uncomment the savesubscriptionid and deletesubscriptionid functions we commented earlieryour 
notificationjs should look like this nowsavesubscriptioniddeletesubscriptionid} function deletesubscriptionid} ispushsupporteds try to activate push notification and see if new users are created and stored in the database of our api servicereload your app and push the activate buttonoopsthere is an error in our consoledont fretthe reason why we are encountering this issue is because we dont have a manifestjson file in our web app yetthe interesting thing here is thiscreating a manifestjson file will solve this challenge and add another feature to our appwith a manifestjson filell be able to add our app to a users device homescreen and make the app installableviolago ahead and create a manifestjson file in the root directory like sonamepwa - commitsshort_namepwadescriptionprogressive web apps for resources i likestart_url/indexhtmlutm=homescreendisplaystandaloneorientationportraitbackground_color#f5f5f5theme_coloricons[ {src/images/192x192typeimage/pngsizes192x192/images/168x168168x168/images/144x144144x144/images/96x9696x96/images/72x7272x72/images/48x4848x48} ]authorprosper otemuyiwawebsitehttps//twittercom/unicodevelopergithub//githubsource-repocom/unicodeveloper/pwa-commitsgcm_sender_id571712848651}lets quickly highlight what these keys represent in our web app manifest filerepresents the name of the app as it is usually displayed to the userrepresents a short version of the name of the web applicationprovides a general description of the web applicationis the url that loads when the user launches the web applicationdefines the default display mode for the web applicationthe different modes are fullscreenminimal-uiprovides the default orientation for the web applicationit could be portrait or landscaperepresents the background color of the web apprepresents the default theme color of the appit colors the status bar on androidrepresents the applicationsicon set for the homescreensplash screen and task switcheris akey that represents the author of the 
app.

gcm_sender_id represents the sender_id from Firebase Cloud Messaging that is used to identify the application; replace the sender_id value here with the one from your dashboard. In your index.html and latest.html, reference the manifest.json file like so: <link rel='manifest' href='/manifest.json'>. Clear your cache, reload your application and click the notification button. Yaaay, it works! And it got posted: you can see that the subscription IDs are the same, meaning the ID was posted and saved in the API service database. (Robomongo is the IDE I use to manage my MongoDB database.) You can try to unsubscribe and see how it deletes the user from the API service database.

Sending and receiving notifications: in our API service, we have a /api/notify route that we can make a POST request to, and our notification will be fired via the Firebase Cloud Messaging service. That's not enough: we also need a way to listen for and accept this notification in the browser. Service worker to the rescue again! Within the service worker, we can listen to the push event like so (sw.js; the title, body, and tag values below are illustrative, matching the message our API sends):

self.addEventListener('push', function(event) {
  console.log('Push event fired');
  var title = 'New commit on Github Repo: RIL'; // illustrative value
  var body = 'Click to see the latest commit';  // illustrative value
  var tag = 'pwa-commits';                      // illustrative value
  event.waitUntil(self.registration.showNotification(title, { body: body, tag: tag }));
});

Add that piece of code to the sw.js file, clear your cache, and reload your app. Now use Postman to send a POST request to http://localhost:3333/api/notify. When a notification is fired, our browser will welcome the notification. After receiving the notification, we can decide what to do when a user clicks on it. Add this piece of code to your service worker:

self.addEventListener('notificationclick', function(event) {
  var url = '/latest.html';
  event.notification.close(); // close the notification
  // open the app and navigate to latest.html after clicking the notification
  event.waitUntil(clients.openWindow(url));
});

The code above listens to the event that is fired when a user clicks on the notification, closes the notification once clicked, and opens a new window or tab redirecting to localhost:8080/latest.html. event.waitUntil() is called to ensure the browser doesn't terminate our service 
worker before our new window has been displayed.

Automate the notification sending process: we have been manually making a POST request via Postman. Practically, we want the user to get a notification once a commit has been made to the GitHub repository github.com/unicodeveloper/resources-i-like. How do we automate this process? Ever heard of webhooks? Yes, GitHub webhooks to the rescue. Use the repository URL of your choice, because you will have to make commits and see that this works as you go through this tutorial. Head over to the repository of your choice (in my case it is the resources-i-like repo) and go to Settings > Webhooks. Click on the 'Add webhook' button. Now it's time to add a hook; the hook will be our notify API endpoint. When you make a commit on GitHub, a push event is fired. With this webhook, a POST request will be sent to the /api/notify API endpoint whenever a commit is made on this particular repository. Sweet! Hold on, wait a minute: there is a strange URL in the payload URL, https://ea71f5a.ngrok.io/api/notify. Where is ngrok.io coming from? How did we get that?

Set up ngrok: we can't use a localhost URL; GitHub needs a URL that exists on the internet, so I took advantage of a tool called ngrok. With ngrok, you can expose a local server to the internet. Install ngrok, then from your terminal use it to expose the port of the API server like so: ./ngrok http 3333. Use whatever URL it outputs on the terminal in the webhook; ngrok outputs both HTTP and HTTPS URLs, so feel free to use either of them, as they both map to your local server. Once you have added the webhook, GitHub immediately does a test POST ping to the hook to determine that it is all properly set up; a green mark indicates that the hook URL is valid.

Make a commit: we have set everything up, so now go ahead and make a commit. Once you do that, a push notification will be sent and your browser will receive it. Yes! The process has been totally automated.

Host the PWA: one of the requirements of a PWA is to have its content served 
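Conceptually, the webhook target is just an HTTP endpoint that fans one message out to every saved subscription. A minimal, framework-free sketch (the function names here are assumed, not the actual pwa-api code; sendToAllSubscribers stands in for the FCM call the real API service makes):

```javascript
// Illustrative sketch: the shape of a /api/notify handler a GitHub push
// webhook could invoke. sendToAllSubscribers is a stand-in for the
// Firebase Cloud Messaging send performed by the real API service.
function makeNotifyHandler(sendToAllSubscribers) {
  return function notify(req, res) {
    // The GitHub push payload arrives in req; for this flow we only need
    // to know a commit happened, then fan out one notification.
    sendToAllSubscribers({
      title: 'New commit on Github Repo: RIL',
      body: 'Click to see the latest commit'
    });
    res.end(JSON.stringify({ status: 'sent' }));
  };
}
```

Wiring this into an Express-style server would then just be a matter of registering the handler on the POST /api/notify route.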
via HTTPS. Firebase hosting is a very good option for deploying our app to a server that supports HTTPS. Our app is now live. I also hosted the API on Heroku; for the app to work fully, I changed the URL in the notification.js file to the live API URL on Heroku, and I also changed the webhook URL to the live API URL.

Add the app to the homescreen: open up your browser on your device (especially Chrome), click on the ellipsis icon at the right, and choose 'Add to homescreen'. The app is now on the homescreen. The PWA and API code is on GitHub.

Aside: easy authentication with Auth0. You can use Auth0 Lock for your progressive web app. With Lock, showing a login screen is as simple as including the auth0-lock library and then calling it in your app like so:

// initiating our Auth0Lock
var lock = new Auth0Lock('YOUR_CLIENT_ID', 'YOUR_AUTH0_DOMAIN');

// listening for the authenticated event
lock.on('authenticated', function(authResult) {
  // use the token in authResult to getProfile() and save it to localStorage
  lock.getProfile(authResult.idToken, function(error, profile) {
    if (error) {
      // handle error
      return;
    }
    localStorage.setItem('id_token', authResult.idToken);
    localStorage.setItem('profile', JSON.stringify(profile));
  });
});

// implementing Lock
document.getElementById('btn-login').addEventListener('click', function() {
  lock.show();
});

In the case of an offline-first app, authenticating the user against a remote database won't be possible when network connectivity is lost. However, with service workers, you have full control over which pages and scripts are loaded when the user is offline. This means you can configure your offline.html file to display a useful message stating that the user needs to regain connectivity to log in again, instead of displaying the Lock login screen.

Conclusion: we have been able to successfully make our app work offline, load instantly, receive push notifications, and also be installable. Progressive web apps have a checklist; I highlighted the requirements in part 1. There is a tool, Lighthouse, for auditing an app for progressive web app features; it is available as a Chrome extension and also a CLI. I recommend that you use this tool frequently when developing a progressive web app. This tutorial wouldn't have been possible 
without Ire's series on PWA, Tim's server-side push notification tutorial, Gokulakrishnan's PWA demo app, and the folks at Google who work and blog daily about progressive web apps. Thanks a bunch! Hopefully you're now ready to dive fully into making your web applications progressive.", "image" : "https://cdn.auth0.com/blog/pwa/push_notification_Logo.png", "date" : "December 28, 2016" } , { "title" : "Learn About Inferno JS: Build and Authenticate an App", "description" : "Inferno is a fast, small, React-like JavaScript UI library.", "author_name" : "Kim Maida", "author_avatar" : "https://en.gravatar.com/userimage/20807150/4c9e5bd34750ec1dcedd71cb40b4a9ba.png", "author_url" : "https://twitter.com/KimMaida", "tags" : "inferno", "url" : "/learn-about-inferno-js-build-and-authenticate-an-app/", "keyword" : "This article has been updated to Inferno v1.x. TL;DR: Inferno JS is a blazing-fast, lightweight, React-like JavaScript library. React developers will find it comfortably familiar. Inferno JS also supplies better performance, smaller size, and other improvements. Inferno is highly modular and unopinionated, encouraging developers to add only the pieces we require and write code to suit our personal preferences. In this tutorial, we'll introduce the Inferno JavaScript library, then build a simple Inferno app and authenticate it with Auth0. The final code can be found at the inferno-app GitHub repo.

Introduction to Inferno JS: Inferno is a fast, lightweight JavaScript library that resembles React. Minified and gzipped, Inferno weighs in at only 9kb; React gzipped is over 40kb. It's also extremely performant in benchmarks as well as real-world applications. Inferno can render on both the client and server, and at the time of writing, it is the fastest JavaScript UI library that exists. These features are very attractive, but many JavaScript developers are overwhelmed by the number of libraries and frameworks already out there. A few tools have emerged as mindshare and usage leaders, React among them. So what are the reasons behind Inferno's 
creation? Who should use Inferno, and why?

Why was Inferno JS created: Inferno's author, Dominic Gannaway, wanted to examine whether a UI library could improve experience for web apps on mobile devices. This included addressing issues that existing UI libraries had with battery drain, memory consumption, and performance. Inferno builds on the same API as React to greatly diminish the barrier to entry and take advantage of the best features of React. The result was a lightweight and incredibly performant UI library that React developers will find delightfully familiar, but also improved.

Inferno JS features: Inferno has many features, including but not limited to: a component-driven, one-way data flow architecture; a partial synthetic event system; a linkEvent feature, which removes the need for arrow functions or binding event callbacks; isomorphic rendering on both client and server with inferno-server; lifecycle events on functional components; and controlled components for input/select/textarea elements. You can read more about the features of Inferno and how Inferno works in the Inferno GitHub readme and an in-depth Inferno interview with Dominic Gannaway on the SurviveJS blog. Note: I strongly recommend reading the interview article; it provides the technical details of Inferno, how it works, and how it compares to similar libraries like React and Preact.

Who should use Inferno: Dominic Gannaway initially developed Inferno to improve performance on mobile. He says: 'Inferno is a great library for building UIs for mobile where performance has been poor in other libraries and people are looking around for alternatives.'

Learning and using the Inferno JS library: because Inferno is built on the same API as React, developers gain several adoption advantages when learning or switching to Inferno. React developers will find Inferno very familiar, resulting in a low barrier to entry; no extra time or money is needed to invest in learning a different library; and the extensive availability of React resources online means that these tutorials and 
docs are helpful when learning inferno as wellan inferno-compat package allows developers to switch existing react projects to inferno in just a few lines of codethere is a growing set of inferno packages availablesuch as inferno-reduxinferno-mobxinferno-routerand morethe official inferno website and documentation can be viewed heredominic gannaway also recommends the react courses on eggheadio as well as react tutorials by wes bosin additionresources such as auth0s react quick start and secure your react and redux app with jwt authentication can offer insight into managing authentication with infernodevelopers can get started easily with inferno with the create-inferno-app projectthis is a fork of create-react-app and sets up boilerplate for developingtestingbuildingand serving an inferno appset up an inferno appnow that weve learned a little bit about infernolets build a simple app that calls an api to get aof dinosaursll be able to click a dinosaurs name to display more informations get starteddependencieswell need nodejswith npminstalled globallyif you dont have node alreadydownload and install the lts version from the nodejs websitere going to use create-inferno-app to generate the boilerplate for our applicationinstall create-inferno-app globally with the following command$ npm install -g create-inferno-appcreate a new inferno applets scaffold a new inferno project with create-inferno-appnavigate to a folder of your choosing and run the following commands to create a new app and start the local server$ create-inferno-app inferno-app$ cd inferno-app$ npm startthe app can now be accessed at http//localhost3000 and should look like this in the browserinstall bootstrap cssto style our components quicklys use bootstrapversion 3 is the latest stable release at the time of writingll use npm to install bootstrap$ npm install bootstrap@3 --saveimport the bootstrap css file in the src/indexjs file to make it available in the application// 
src/indeximportbootstrap/dist/css/bootstrapcssinstall nodejs dinosaurs apiour app needs an apis clone sample-nodeserver-dinos in the root of our inferno app and then rename the repo folder to serverthen well execute npm install to install the necessary dependencies to run our apithe command to rename files or folders is mv on mac/linux or ren on windows$ git clone https//githubcom/auth0-blog/sample-nodeserver-dinosgit$ mv sample-nodeserver-dinos server$ cd server$ npm installour inferno app runs on a local development server at localhost3000 and the dinosaur api runs on localhost3001for brevityre going to run the app by launching the api server and app server in separate command windowshoweverif youd like to explore running multiple processes concurrently with one commandcheck out this articleusing create-react-app with a servercall a node api in an inferno applets start our api serverfrom the server folderrun$ node serverjscreate an api serviceto call our apiwe can build a service to fetch dataapp components can then use this service to make api requestss create a new foldersrc/utilsinside this foldermake a new file and call it apiservice// src/utils/apiservicejsconst api =http3001/api/// getof all dinosaurs from apifunction getdinolist{ return fetch`${api}dinosaurs`then_verifyresponse_handleerror}// get a dinosaurs detail info from api by idfunction getdinoid`${api}dinosaur/${id}`}// verify that the fetched response is jsonfunction _verifyresponseres{ let contenttype = resheadersgetcontent-typeifcontenttype &&contenttypeindexofapplication/json== -1{ return resjson} else { _handleerror{ messageresponse was not json}}}// handle fetch errorsfunction _handleerrorerror{ consolean error occurredthrow error}// export apiserviceconst apiservice = { getdinolistgetdino }export default apiservicethe fetch api makes http requests and returns promiseswe want to create methods to get the fullof dinosaurs as well as get an individual dinosaurs details by idll make sure the 
response is validhandle errorsand then export the getdinolistand getdinomethods for our components to useget api data and display dino listnow we need to call the api from a component so we can display theof dinosaurs in the uiopen the src/appjs filethis has some boilerplate in it that we can delete and replace// src/appjsimport inferno fromimport component frominferno-componentimport apiservice from/utils/apiservice/appclass app extends component { componentdidmount{ // getof dinosaurs from api apiservicegetdinolistres =>{ // set state with fetched dinosthissetstate{ dinosres }error =>{ // an error occurredset state with error this{ errorerror }} renderpropsstate{ return<div classname=app>header classname=app-header bg-primary clearfixh1 classname=text-centerdinosaurs</h1>/header>app-content container-fluidrow{ statedinosul>{ statemapdino=>li key={dinoid}>{dinoname}</li>} </ul>p>loading/p>} </div>}}export default appif you have react experiencethis should look familiarre new to react and infernoplease check out the react docs to learn about general syntaxstate and lifecyclejsxin the componentdidmount lifecycle hookll call our apiservice to get an array of all dinosaursll set state for our dinosaur array on success and error on failurein the renderwe can pass propsthere are none in this caseand state as parameters so we dont litter our renderfunction with thisfor examplethis way we can use statedinos instead of thisnow that weve replaced the boilerplate jss delete everything in src/appcss and replace it with the following/* src/appcss */app-header { margin-bottom20pxapp-header h1 { margin0padding20px 0app-content { margin0 automax-width1000px}our app now looks like thiscreate a loading componentin our appre simply showing <when there isnt any dinosaur datathis should be an error instead if the api isnt availableyou may have noticed that we put error in the state but didnt use it in renderyets create a small component that conditionally shows a loading image or an 
error messagecreate a new foldersrc/componentsall components from now on will go into this foldernext create a subfolder called loading to contain our new components filess add a loading imageyou can grab the raptor-runninggif from github heredownload the image into the src/components/loading directorycreate an empty file for the loading cssloading component jsnextadd a new js file called loading// src/components/loading/loadingimport loading from/raptor-loadinggif/loadingclass loading extends component { render{img classname=loading-imgsrc={loading} alt=/>p classname=alert alert-dangerstrong>/strong>could not retrieve data} <}}export default loadings import the loading image and csswhenever we use our loading componentre going to pass the parent components error state as a propertywe can then pass props to our renderfunction to access properties without needing thisll render the loading image if theres no errorand show an alert if there isloading component cssadd the following css to loading/* src/components/loading/loadingloading-img { displayblockmargin10px auto}lets verify that our src file structure looks like thispublicserversrc-components-loading-running-raptor-utils-apiservice-apptest-indexjsadd loading component to appnow we can use this component in our appjs and replace the <element we added earlier/components/loading/loadingrenderloading error={stateerror} />}now while data is being fetched from the apithe running dinosaur gif is shownif we stop the api server and reload the appll see our error message instead of an infinite loading statecreate acomponentlets explore creating a component that displays theof dinosaurs and lets users select one to show additional detailsthis will replace the simple unorderedthat we put in the appcreate a new folder containing css and js files for the dinolist componentsrc-dinolistjslist component jsadd the following code to dinolist// src/components/dinolist/dinolistjsimport inferno{ linkevent } 
from//loading/loading/dinolist/* this function is pulled out of the class to demonstrate how we could easily use third-party apis*/function getdinobyidobj{ const id = objconst instance = objinstance// set loading state to true while data is being fetched // set active state to index of clicked item instance{ loadingtrueactiveid }// get dino by id // on resolveset detail state and turn off loading apiservicegetdino{ instance{ detailfalsefalse }{ error}class dinolist extends component { constructor{ super// set default loading state to false thisstate = { loadingfalse }dinolistcol-sm-3ul classname=dinolist-{ propsa classname={stateactive === dino} onclick={linkevent{idthis}getdinobyid}>name} </a>col-sm-9loading &error &detail{state} <}}export default dinolistll import linkevent from infernolinkeventis an excellent helper function unique to infernoit allows attachment of data to events without needing bindarrow functionsor closuresthe renderfunction will use the props and state parametersll pass the dinosaurs array to our dinolist component from appjs as a propertythe getdinobyidfunction at the top of the file is the event handler for when a user clicks a dinosaurs name to get dino detailsthis is not a method on the dinolist classthe function is pulled out to demonstrate how components can easily leverage methods from third-party apis with linkeventthe obj parameter comes from the linkeventin renderlike soa classname={state} onclick={linkeventlinkevent can pass dataas well as the eventto a handlerre using it here to pass the clicked dinosaurs id to call the api and apply a class to the active dinosaur in there also passing the instanceso we can use instancein our getdinobyidfunction without context errors or bindingcomponent cssnextadd the following to the dinolistcss file to style the/* src/components/dinolist/dinolistdinolist a { cursorpointerdinolist aactive { font-weightboldtext-decorationunderline}addcomponent to appin the apps replace our unorderedwith the new 
dinolist component/components/dinolist/dinolistdinolist dinos={statedinos} />}at this pointwhen a dinosaur in thisis clickedthe onlyre showing is the dinosaurs namealsobecause we dont make an api call automatically on loadthe ui will show the loading image in the details area until the user clicks on a dinosaur in theclearly this isnt idealll create a dinodetail component next to display this in a much nicer waycreate a detail componentlets make a new folder for our dinodetail componentsrc/components/dinodetailll only use bootstrap to style this componentso a css file wont be necessarydetail component jslets build the dinodetail// src/components/dinodetail/dinodetailjsimport inferno from 'inferno'import component from 'inferno-component'class dinodetail extends component { render{ let dino = propsreturndiv classname="dinolist"{ dino-group"-group-item-group-item-info"h3 classname="-group-item-heading text-center"/h3>-group-item"h4 classname="-group-item-heading"pronunciation</h4>p classname="-group-item-text"pronunciation}<meaning of name<"meaningofname}"period<period}mya} million years agodiet<diet}<length<length}<p classname="-group-item-text lead"dangerouslysetinnerhtml={{__htmlinfo}}>lead"em>select a dinosaur to see details/em>}}export default dinodetaildespite the large amount of jsxthis is a very simple componentall it does is take a dino property and display dataif there is no dino availableit shows a message that instructs the user to select a dinosaurthe api returns html in some dinosaursinfo propertieswe render this using dangerouslysetinnerhtmlyou can read more about this in the dom elements section of the react docsadd detail component tocomponentnow well replace the detail dinosaur name in the dinolist component with our new dinodetail componentimport dinodetail from/dinodetail/dinodetaildinodetail dino={statedetail} />}note that weve also changed the expression towe no longer want to check for statedetail here because we still want to display the 
DinoDetail component even if there is no detail information available yet; we only added that check in the previous step to avoid errors when no dinosaur is selected. Our app now looks like this: when a dinosaur is clicked, its details are fetched from the API and displayed. The selected dinosaur receives an active class in the list, which we styled as bold and underlined in the DinoList CSS previously.

Authenticate an Inferno app with Auth0: the last thing we'll do is add Auth0 authentication to our Inferno app. At the moment, our sample dinosaur API doesn't have any secured endpoints, but if we need them in the future, Auth0's JSON Web Token authentication can help.

Configure your Auth0 client: the first thing you'll need is an Auth0 account. Follow these simple steps to get started:

1. Sign up for a free Auth0 account.
2. In your Auth0 dashboard, create a new client.
3. Name your new app and select 'Single Page Web Applications'.
4. In the settings for your newly created app, add http://localhost:3000 to the Allowed Callback URLs and Allowed Origins (CORS).
5. If you'd like, you can set up some social connections; you can then enable them for your app in the client options under the Connections tab. The example shown in the screenshot above utilizes username/password database, Facebook, Google, and Twitter.

Add authentication logic to the Inferno app: use npm to install auth0-lock:

```bash
$ npm install auth0-lock --save
```

Now that auth0-lock is installed, we can use it in our App.js file to implement authentication logic. We'll also need to create two new components, Login and User; these components are referenced in the App.js code below.

```js
// src/App.js
import Auth0Lock from 'auth0-lock';
import Login from './components/Login/Login';
import User from './components/User/User';

function logout(instance) {
  // Remove token and profile from state
  // (using instance passed in by linkEvent to preserve context)
  instance.setState({
    idToken: null,
    profile: null
  });

  // Remove token and profile from localStorage
  localStorage.removeItem('id_token');
  localStorage.removeItem('profile');
}

class App extends Component {
  constructor(props) {
    super(props);

    // Set initial authentication state:
    // check for existing token and profile
    this.state = {
      idToken: localStorage.getItem('id_token'),
      profile: JSON.parse(localStorage.getItem('profile'))
    };
  }
```
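The constructor above rehydrates auth state from localStorage. As a minimal, framework-free sketch of that persistence pattern (the helper names like loadSession are ours, not from the tutorial, and a plain in-memory object stands in for the browser's localStorage so it runs anywhere):

```javascript
// Illustrative sketch of the session persistence used above.
// `storage` is anything with getItem/setItem/removeItem:
// localStorage in the browser, a mock object here.
function saveSession(storage, idToken, profile) {
  storage.setItem('id_token', idToken);
  storage.setItem('profile', JSON.stringify(profile));
}

function loadSession(storage) {
  return {
    idToken: storage.getItem('id_token'),
    profile: JSON.parse(storage.getItem('profile'))
  };
}

function clearSession(storage) {
  storage.removeItem('id_token');
  storage.removeItem('profile');
}

// Node has no localStorage, so mock one for demonstration:
const memoryStorage = (() => {
  const data = {};
  return {
    getItem: (k) => (k in data ? data[k] : null),
    setItem: (k, v) => { data[k] = String(v); },
    removeItem: (k) => { delete data[k]; }
  };
})();

saveSession(memoryStorage, 'abc123', { name: 'Dino Fan' });
const session = loadSession(memoryStorage);
console.log(session.idToken);      // 'abc123'
console.log(session.profile.name); // 'Dino Fan'
clearSession(memoryStorage);
console.log(loadSession(memoryStorage).idToken); // null
```

Keeping the storage object injectable like this also makes the logic trivial to test outside a browser.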
In componentDidMount, we create the Lock instance:

```js
  componentDidMount() {
    // Create Auth0 Lock instance
    this.lock = new Auth0Lock('[YOUR_CLIENT_ID]', '[YOUR_DOMAIN].auth0.com');

    // On successful authentication:
    this.lock.on('authenticated', (authResult) => {
      // Use the returned token to fetch the user profile
      this.lock.getUserInfo(authResult.accessToken, (error, profile) => {
        if (error) { return; }

        // Save token and profile to state
        this.setState({
          idToken: authResult.idToken,
          profile: profile
        });

        // Save token and profile to localStorage
        localStorage.setItem('id_token', authResult.idToken);
        localStorage.setItem('profile', JSON.stringify(profile));
      });
    });

    /* ...fetch list of dinosaurs from API... */
  }
```

In the header markup:

```js
<div className='App-auth pull-right'>
  {!state.idToken ? (
    <Login lock={this.lock} />
  ) : (
    <div className='App-auth-loggedin'>
      <User profile={state.profile} />
      <a
        className='App-auth-loggedin-logout'
        onClick={linkEvent(this, logout)}>Log out</a>
    </div>
  )}
</div>
```

We import auth0-lock as well as the two new components we'll create: Login will display a link that launches the Auth0 Lock widget, and User will display after login and show the authenticated user's name and picture. In the constructor, we check for an existing token and profile from a previous login and set them if available. In componentDidMount, we create our Lock instance; replace [YOUR_CLIENT_ID] and [YOUR_DOMAIN] with your Auth0 client information. On successful authentication, we do the following: use the access token to fetch the user profile with Lock, save the token and profile to state, and save the token and profile to localStorage to persist the session.

In the render() function, we add the Login and User components to the <header> element, as well as a logout link. These show conditionally based on the presence or absence of a token, and we pass properties to them. The logout() function, pulled out near the top of App.js, clears the user's token and profile from state and removes this data from local storage.

Update App CSS: we'll add a few more styles to App.css to support our new markup:

```css
.App-auth {
  font-size: 12px;
  margin: 20px 10px;
}
.App-auth a {
  color: #fff;
  cursor: pointer;
  display: inline-block;
}
.App-auth-loggedin-logout {
  border-left: 1px solid rgba(255, 255, 255, .6);
  margin-left: 4px;
  padding-left: 4px;
}
```

Create the Login component: next we'll create the Login component. When the user is logged out, the app will have a 'Log in' link in the header, like so. Add the necessary folder and files for our Login
component: src/components/Login/Login.js. Our Login.js should look like this:

```js
// src/components/Login/Login.js
import Inferno, { linkEvent } from 'inferno';
import Component from 'inferno-component';
import './Login.css';

// Use the `lock` prop passed in App.js to
// show the Auth0 Lock widget so users can log in
function showLock(instance) {
  instance.props.lock.show();
}

class Login extends Component {
  render() {
    return (
      <div className='Login'>
        <a onClick={linkEvent(this, showLock)}>Log in</a>
      </div>
    );
  }
}

export default Login;
```

We passed our app's Lock instance to Login so we could access its show() method; the Login component has a link that shows the Lock widget when clicked. We'll add just a little bit of CSS to support this component:

```css
/* src/components/Login/Login.css */
.Login {
  padding: 10px 0;
}
```

Create the User component: finally, we'll build the User component. This will show the user's profile picture and name when authenticated. Add the necessary folder and files for our User component: src/components/User/User.js. The User.js file should look like this:

```js
// src/components/User/User.js
import Inferno from 'inferno';
import Component from 'inferno-component';
import './User.css';

class User extends Component {
  render(props) {
    let profile = props.profile;
    let idp = profile.user_id.split('|')[0];

    return (
      <div className='User' title={idp}>
        <img src={profile.picture} alt={profile.name} />
        <span>{profile.name}</span>
      </div>
    );
  }
}

export default User;
```

We'll display the user's picture and name. As a bonus, we can add a title attribute that shows the identity provider the user signed in with (i.e., twitter, facebook, etc.). Add a few styles to User.css for alignment and a circular profile image:

```css
/* src/components/User/User.css */
.User {
  display: inline-block;
}
.User img {
  border-radius: 100px;
  height: 36px;
  margin-right: 6px;
  width: 36px;
}
```

We now have working authentication in our Inferno app. In the future, we can use identity management to secure routes, conditionally render UI, and control user access. Check out these additional resources to learn about packages and tutorials that will be helpful for authentication with Inferno / React-like apps: Inferno Redux, Inferno Router, the Auth0 React quick start, and React Authentication is Easy with Auth0.

Conclusion: we've learned how to create a basic real-world application with Inferno. We've also explored some of the features Inferno has that its predecessors lack, such as linkEvent, and we've demonstrated how simple it can be to utilize Inferno with external methods. Inferno author Dominic Gannaway's favorite
feature is lifecycle hooks for functional components, something we didn't explore in this tutorial but that should certainly be utilized by developers who prefer a functional component approach. To learn more about the Inferno JS library, you can peruse the docs at infernojs.org or get in touch with the community and development team by checking out the Inferno GitHub and Inferno Slack. If you're a JavaScript developer trying to improve performance and reduce file size in your web apps, check out Inferno. Even if you don't have React experience, Inferno is easy to learn thanks to the abundance of React resources and tutorials available. Hopefully you're now ready to get started with Inferno in your projects.", "image" : "https://cdn.auth0.com/blog/inferno/Inferno_Logo.png", "date" : "December 27, 2016" } , { "title" : "Auth0 Named in Seattle’s 10 Hottest Entrepreneurial Ventures for 2016", "description" : "Auth0 had the privilege of joining the Seattle 10 Class of 2016", "author_name" : "Martin Gontovnikas", "author_avatar" : "https://www.gravatar.com/avatar/df6c864847fba9687d962cb80b482764??s=60", "author_url" : "http://twitter.com/mgonto", "tags" : "auth0", "url" : "/auth0-named-in-seattle-10-hottest-ventures-2016/", "keyword" : "Bellevue, WA: every year the Seattle Museum of History and Industry partners with GeekWire to present the Seattle 10, a collection of ten history-making local start-up companies. Auth0 had the privilege of joining the Seattle 10 Class of 2016, chosen over dozens of other fantastic nominees. To commemorate, Auth0 and the other winners recreated their business ideas on a six-foot by six-foot cocktail napkin that was unveiled on December 7 at the GeekWire Gala; the pop-up exhibit will run through January 29, 2017.

About Auth0: Auth0 provides frictionless authentication and authorization for developers. The company makes it easy for developers to implement even the most complex identity solutions for their web, mobile, and internal applications. Ultimately, Auth0 allows developers to control how a
person's identity is used, with the goal of making the internet safer. As of August 2016, Auth0 has raised over $24M from Trinity Ventures, Bessemer Venture Partners, K9 Ventures, Silicon Valley Bank, Founders Co-op, Portland Seed Fund and NXTP Labs, and the company is further financially backed with a credit line from Silicon Valley Bank. For more information visit https://auth0.com or follow @auth0 on Twitter.", "image" : "https://cdn.auth0.com/blog/seattle-top10-2016/logo.png", "date" : "December 23, 2016" } , { "title" : "Personal Information Security Guide for Family and Friends", "description" : "Help your family and friends stay secure with this printable security guide", "author_name" : "Prosper Otemuyiwa", "author_avatar" : "https://en.gravatar.com/avatar/1097492785caf9ffeebffeb624202d8f?s=200", "author_url" : "https://twitter.com/unicodeveloper", "tags" : "Security", "url" : "/personal-information-security-identity-guide/", "keyword" : "TL;DR: The digital age is upon us. As much as it has blessed us and made collaboration and work easier, faster and more efficient, it also has its downsides: a lot of damage can be done by someone in a remote village over the internet. Cybercriminals carefully exploit loopholes to steal information and perform unauthorized transactions across devices and applications that are not secured well enough. This article aims to serve as a personal information security and identity guide for you, your family and friends. Follow this guide and keep your family and friends safe from cybercriminals this season. Find the printable guide here: Personal Information Security Guide for Family and Friends (PDF).

The guide: we came up with a list of questions that sums up the common security challenges a lot of people experience, and provided answers that can guide you below.

How do I make sure my email is secure?
- Set up two-factor authentication. This adds an additional step to verifying a user logging into their email, e.g. your email provider sends a code to your phone
that you must enter into a form to successfully authenticate and gain access to your email.
- Avoid sending confidential information such as passwords or social security numbers through email.
- Assume links in your emails are not from a secure and reputable source, e.g. links that lead to banking services and billing sites; type the address into the browser to go to these sites instead.

How do I secure my social network accounts and prevent them from being hacked?
- Don't use the same password across multiple online services.
- Use a password manager such as LastPass or 1Password to store and also generate secure passwords.
- Change passwords frequently.
- Register yourself on Have I Been Pwned to find out whether your accounts have been compromised.
- Set up two-factor authentication if it's available on the social network.

Is my home network secure? When setting up your wireless network at home, ensure you have a very strong WPA-2 (Wi-Fi Protected Access 2) password by following this process: log into your account, open the wireless tab to edit your wireless settings, click to enable WPA-2 from the dropdown options, and set a strong password (see how to pick a good password in question #5).

How do I detect phishing emails? Phishing emails are emails designed to look like legitimate messages from actual banks, businesses, and other organizations; in reality, they are crafted messages from cybercriminals intended to steal your identity, personal information, or money. Do not click on any links that you do not recognize, especially if they come from an unknown source. Better still, assume all links in emails are phishing attempts: if it's a mail from the bank, type the address into the browser rather than clicking on the link. Often, phishing emails do not include your name but something generic like “dear client”; watch out for such emails.

How to pick a good password: your password should be at least 10 characters long, and it should be a combination of alphanumeric, special, lower- and uppercase characters. Check out this guide for more information on defining a strong password.
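The password advice above (at least 10 characters, mixing character classes) can be expressed as a small check. This helper is our illustration, not part of the printable guide, and its rules are only the minimum the guide describes:

```javascript
// Illustrative check based on the guide's advice:
// at least 10 characters, with lowercase, uppercase,
// a digit, and a special character all present.
function isStrongPassword(password) {
  return (
    password.length >= 10 &&
    /[a-z]/.test(password) &&
    /[A-Z]/.test(password) &&
    /[0-9]/.test(password) &&
    /[^a-zA-Z0-9]/.test(password)
  );
}

console.log(isStrongPassword('password'));       // false: too short, one class
console.log(isStrongPassword('Tr1ceratops!42')); // true
```

A length-and-classes rule is a floor, not a ceiling; a password manager's random generator will easily satisfy it.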
Do I need antivirus software on my computer? Yes, antivirus software is needed on your computer: it detects, prevents, and removes malicious software. You can install trusted antivirus software like Kaspersky or Avast.

Why is it not good to share too much information about yourself and family on social media platforms? If you post too much information about yourself, friends and family, an identity thief can find information about your life, use it to answer challenge questions on your accounts, and get access to your money and personal information. Make as little content as possible public; for example, 'share only with friends' is a good default option for Facebook accounts. (Source: huffingtonpost.com)

Should I care about security while connecting to wireless networks? While connected to public Wi-Fi networks, ensure that the sites you visit or submit information to are secure by checking that the URL starts with https instead of http. Enable the firewall on your computer: you can configure the application firewall on your Mac by going through the Apple support instructions, and on your Windows PC by going through the corresponding instructions.

How do I detect phone scams? Phone scams are text messages or phone calls designed to trick you into providing sensitive information to unauthentic authorities. Make sure you check for typos, and watch out for 'too good to be true' deals sent via text messages.

How to use a password manager: a password manager is software that helps store and organize user passwords. It is not advisable to use the same passwords across various websites and services, so using a password manager helps alleviate the challenge of committing complex and strong passwords to memory. There are several password managers available; notable ones, with instructions on how to use them, include 1Password, Dashlane, LastPass and KeePass.

Why should I not reuse my password on every website? Reusing a password on several
services is a high-risk venture: if your password is compromised on one service, hackers can gain access to your accounts on several services and cause lots of damage.

How would I know if a website is secure enough to enter my credit card information? Look out for the encryption symbol (padlock) in the URL bar, and verify that the site is secure by ensuring that the URL starts with https instead of http before providing sensitive information. In addition, prefer recognized brands over unknown ones, even if the padlock icon for https is present in the address bar.

How do I make sure a website is real? Look out for typos in the site name and URL. A typical example is https://www.paypal.com; a fake version of this might be https://paypa1.com. Also look out for the encryption symbol, and verify that the site is secure by ensuring that the URL starts with https instead of http.

How to secure your mobile devices: ensure that your mobile device operating system (OS) is always up to date. Have a lock system for your device, e.g. password lock, fingerprint lock or pattern lock. If there is an option for encryption in the device settings, enable it. Do not use alternate app stores, e.g. alternatives to Google Play and the App Store.

What are your personal backup strategies? We recommend that you back up your personal information securely on a regular basis. There are several good options for automatically backing up your data; some popular and trusted options are CrashPlan, Carbonite, SpiderOak and Backblaze. Mac users can also use Apple Time Machine.

Conclusion: finally, we would like to give you a word of advice to complement these tips and ensure you, your family and friends stay safe this season and beyond. Keep these things in mind, and we are hopeful that you will enjoy this season with your friends and family. Merry Christmas and Happy New Year in advance!", "image" : "https://cdn.auth0.com/blog/personal-info-security-guide/logo.png", "date" : "December 23, 2016" } , { "title" : "Introduction to Progressive Web Apps (Instant Loading) - Part 2", "description" :
"Progressive Web Apps are the future. Learn how to make your mobile web app native-like by making it work offline and load instantly.", "author_name" : "Prosper Otemuyiwa", "author_avatar" : "https://en.gravatar.com/avatar/1097492785caf9ffeebffeb624202d8f?s=200", "author_url" : "http://twitter.com/unicodeveloper?lang=en", "tags" : "pwa", "url" : "/introduction-to-progressive-web-apps-instant-loading-part-2/", "keyword" : "tldrweb development has evolved significantly over the years allowing developers to deploy a website or web application and serve millions of people around the globe within minuteswith just a browsera user can put in a url and access a web applicationwithprogressive web appsdevelopers can deliver amazing app-like experiences to users using modern web technologiesin the first part of this tutorial we set up our progressive web appcached the pages and made it work partially offlinethis timewell make it load instantly and work offline fullyrecap and introduction to part 2in introduction to progressive web appsoffline firstwe discussed how a typical progressive web application should look like and also introduced the service workerso farve cached the application shellthe index and latest pages of our web app now load offlinethey also load faster on repeated visitstowards the end of the first part of this tutorialwe were able to load the latest page offline but couldnt get dynamic data to display when the user is offlinethis tutorial will covercaching the app data on the latest page to be displayed to the user when offlineusing localstorage to store the app dataflushing out old app data and fetching updated data when the user is connected to the internetoffline storagewhen building progressive web appsthere are various storage mechanisms to consider like soindexeddbthis is a transactional javascript based database system for client-side storage of datathis database employs the use of indexes to enable high performance searches of the data stored in 
it. IndexedDB exposes an asynchronous API that supposedly avoids blocking the DOM, but some research has shown that in some cases it is blocking. I recommend that you use libraries when working with IndexedDB, because manipulating it in vanilla JavaScript can be very verbose and complex; examples of good libraries are localForage, idb and idb-keyval. (See IndexedDB browser support.)
- Cache API: best for storing URL-addressable resources; works really well with the service worker.
- PouchDB: an open-source JavaScript database inspired by CouchDB. It enables applications to store data locally while offline, then synchronize it with CouchDB and compatible servers when the application is back online, keeping the user's data in sync no matter where they next log in. PouchDB supports all modern browsers, using IndexedDB under the hood and falling back to WebSQL where IndexedDB isn't supported. It is supported in Firefox 29+ (including Firefox OS and Firefox for Android), Chrome 30+, Safari 5+, Internet Explorer 10+, Opera 21+, Android 4.0+, iOS 7.1+ and Windows Phone 8+.
- Web Storage, e.g. localStorage: synchronous, and can block the DOM. Usage is capped at 5MB in most browsers. It has a simple API for storing data using key-value pairs. (See Web Storage browser support.)
- WebSQL: a relational database solution for browsers. It has been deprecated and the specification is no longer maintained, so browsers may not support it in the future.

(Also be aware of each platform's mobile storage quota.) Addy Osmani has a comprehensive resource on offline storage for progressive web apps; you should really check it out. According to PouchDB maintainer Nolan Lawson, do well to ask yourself these questions when you are using a database: Is this database in-memory or on-disk? What needs to be stored on disk; what data should survive the application being closed or crashing? What needs to be indexed in order to perform fast queries; can I use an in-memory index instead of going to disk? How should I structure my in-memory data relative to my database data; what's my strategy for mapping between the two? What are the
query needs of my app? Does a summary view really need to fetch the full data, or can it just fetch the little bit it needs? Can I lazy-load anything? You can check out How to Think About Databases to get broader knowledge on the subject.

Let's implement instant loading: for our web app, we'll use localStorage. I recommend that you don't use localStorage for production apps because of the limitations I highlighted earlier in this tutorial, but the app we are building is a very simple one, so localStorage will work fine. Open up your js/latest.js file. We will update the fetchCommits function to store the data it fetches from the GitHub API in localStorage, like so:

```js
function fetchCommits() {
  var url = 'https://api.github.com/repos/unicodeveloper/resources-i-like/commits';

  fetch(url)
    .then(function(fetchResponse) {
      return fetchResponse.json();
    })
    .then(function(response) {
      console.log('Response from GitHub: ', response);

      var commitData = {};
      for (var i = 0; i < posData.length; i++) {
        commitData[posData[i]] = {
          message: response[i].commit.message,
          author: response[i].commit.author.name,
          time: response[i].commit.author.date,
          link: response[i].html_url
        };
      }
      localStorage.setItem('commitData', JSON.stringify(commitData));

      for (var i = 0; i < commitContainer.length; i++) {
        container.querySelector('.' + commitContainer[i]).innerHTML =
          '<h4>' + response[i].commit.message + '</h4>' +
          '<h4>' + response[i].commit.author.name + '</h4>' +
          '<h4>Time committed: ' + new Date(response[i].commit.author.date).toUTCString() + '</h4>' +
          '<h4><a href=' + response[i].html_url + '>Click me to see more!</a></h4>';
      }

      app.spinner.setAttribute('hidden', true); // hide spinner
    })
    .catch(function(error) {
      console.log('Error: ', error);
    });
}
```

With the piece of code above, on first page load the commit data will be stored in localStorage. Now let's write another function to retrieve the data from localStorage, like so:

```js
// Get the commits data from web storage
function fetchCommitsFromLocalStorage(data) {
  var localData = JSON.parse(data);
  app.spinner.setAttribute('hidden', true); // hide spinner

  for (var i = 0; i < commitContainer.length; i++) {
    container.querySelector('.' + commitContainer[i]).innerHTML =
      '<h4>' + localData[posData[i]].message + '</h4>' +
      '<h4>' + localData[posData[i]].author + '</h4>' +
      '<h4>Time committed: ' + new Date(localData[posData[i]].time).toUTCString() + '</h4>' +
      '<h4><a href=' + localData[posData[i]].link + '>Click me to see more!</a></h4>';
  }
}
```

This piece of code fetches data from localStorage and appends it to the DOM. Now we need a conditional to know when to call the fetchCommits and fetchCommitsFromLocalStorage functions. The updated latest.js file should look like so:
```js
// latest.js
(function() {
  'use strict';

  var app = {
    spinner: document.querySelector('.loader')
  };
  var container = document.querySelector('.container');
  var commitContainer = ['first', 'second', 'third', 'fourth', 'fifth'];
  var posData = ['first', 'second', 'third', 'fourth', 'fifth'];

  // Check that localStorage is both supported and available
  function storageAvailable(type) {
    try {
      var storage = window[type];
      var x = '__storage_test__';
      storage.setItem(x, x);
      storage.removeItem(x);
      return true;
    } catch (e) {
      return false;
    }
  }

  // Get commit data from the GitHub API
  function fetchCommits() { /* ...as above; hide spinner when done... */ }

  // Get the commits data from web storage
  function fetchCommitsFromLocalStorage(data) { /* ...as above... */ }

  if (storageAvailable('localStorage')) {
    if (localStorage.getItem('commitData') === null) {
      /* The user is using the app for the first time, or the user
       * has not saved any commit data, so fetch fresh data */
      fetchCommits();
      console.log('Fetch from API');
    } else {
      fetchCommitsFromLocalStorage(localStorage.getItem('commitData'));
      console.log('Fetch from local storage');
    }
  } else {
    toast(`We can't cache your app data yet!`);
  }
})();
```

In the piece of code above, we check whether the browser supports localStorage, and if it does, we check whether the commit data has been cached; if it has not been cached, we fetch, display and cache the app's commit data. Reload the browser (make sure you do a hard, clear-cache reload, else we won't see the result of our code changes), go offline and load the latest page. What happens? Yaaay! It loads the data without any problem. Check the DevTools and you'll see the data stored in localStorage, and just look at the speed at which the page loads from the service worker when the user is offline.

One more thing: now that we can make the app load instantly by fetching data from localStorage, how do we get fresh, updated data? We need a way of still getting fresh data, especially when the user is online. It's simple: let's add a refresh button that triggers a request to GitHub for the most recent data. Open up your latest.html file and add this code for the refresh button within the <header> tag:

```html
<button id='butRefresh' class='headerButton' aria-label='Refresh'></button>
```

So the <header> tag should look like this after adding the button:

```html
<header>
  <span class='header__icon'>
    <svg class='menu__icon no--select' width='24px' height='24px' viewBox='0 0 48 48' fill='#fff'>
      <path d='M6 36h36v-4H6v4zm0-10h36v-4H6v4zm0-14v4h36v-4H6z'></path>
    </svg>
  </span>
  <span class='header__title no--select'>PWA - Commits</span>
  <button id='butRefresh' class='headerButton' aria-label='Refresh'></button>
</header>
```

Finally, let's attach a click event to
the button and add functionality to it. Open your latest.js and add this code at the top, like so:

```js
document.getElementById('butRefresh').addEventListener('click', function() {
  // Get fresh, updated data from GitHub whenever the button is clicked
  toast('Fetching latest data...');
  fetchCommits();
  console.log('Getting fresh data!');
});
```

Clear your cache and reload the app; your latest.html page should now show the refresh button. Any time users need the most recent data, they can just click the refresh button.

Aside: easy authentication with Auth0. You can use Auth0 Lock for your progressive web app. With Lock, showing a login screen is as simple as including the auth0-lock library and then calling it in your app, like so:

```js
// Initiating our Auth0Lock
var lock = new Auth0Lock('YOUR_CLIENT_ID', 'YOUR_AUTH0_DOMAIN');

// Listening for the authenticated event
lock.on('authenticated', function(authResult) {
  // Use the token in authResult to get the profile and save it to localStorage
  lock.getProfile(authResult.idToken, function(error, profile) {
    if (error) {
      // Handle error
      return;
    }
    localStorage.setItem('idToken', authResult.idToken);
    localStorage.setItem('profile', JSON.stringify(profile));
  });
});

// Implementing Lock
document.getElementById('btn-login').addEventListener('click', function() {
  lock.show();
});
```

In the case of an offline-first app, authenticating the user against a remote database won't be possible when network connectivity is lost. However, with service workers you have full control over which pages and scripts are loaded when the user is offline. This means you can configure your offline.html file to display a useful message stating that the user needs to regain connectivity to log in again, instead of displaying the Lock login screen.

Conclusion: in this article, we made our app load instantly and work fully offline; we cached our dynamic data and served the user the cached data when offline. In the final part of this tutorial, we will cover how to enable push notifications, add a web application manifest, and add our app to a user's home screen.", "image" : "https://cdn.auth0.com/blog/pwa/instant_loading_Logo.png", "date" : "December 22, 2016" } , { "title" : "Extend Slack with Node.js", "description" : "Embrace the benefits of Slack extensibility with Slash Webtasks", "author_name" : "Tomasz Janczuk",
"author_avatar" : "https://s.gravatar.com/avatar/53f70144dc9d7c76455fa91f858d4cec?s=200", "author_url" : "https://twitter.com/tjanczuk?lang=en", "tags" : "webtask", "url" : "/extend-slack-with-nodejs/", "keyword" : "it is 2016 and slack is the new e-mailfor many distributed teams or companies like auth0slack has become the default communication solutionyet the true power of slack goes beyond communicationslack can be extended with integrations to other systemsbeing able to perform most daily tasks from within your teams primary communication channel greatly increases productivityin this post i will show how you can easily extend slack with nodejs using slash webtasksa solution we have created at auth0 that builds on the serverless conceptsusing this approach you can automate processesrun your devopsgenerate reportsand morein a powerful yet simple and efficient waywebhooksthe good partsslack has a rich directory of ready-made appsbut it isintegrations that offer the ultimate flexibility in building team-specific solutionsusing the webhook modelyou can extend slack with arbitrary logic by writingcodewebhooks can be exposed in the slack interface as slash commands for all team members to invokethe webhook code behind a slash command can post synchronous or asynchronous messages back to slackallowing for a range of useful applicationsfor examplesystem health checks……or on-demand reporting of your kpiswhatever they may bedevelopers love webhooks because of the flexibility they offeronce you can writeonly imagination limits what you can accomplishpluswriting code is funbut…webhooksthe bad partsflexibility of the webhook model comes with strings attachedonce the code is writtenturning it into an endpoint requires finding a place to host itensuring securitymonitoringplanning for scalingavailabilityetcin other wordsit requires you to run a servicedevelopers typically utilize hosting solutions like herokuawsor windows azure to set up and maintain a service behind the 
webhook. Due to this added cost, some Slack extensibility ideas on your team never see the light of day. If the perceived value of a potential Slack extension is large enough to offset the cost of setting up and running a service, it will likely be implemented. However, how many of those nice-to-have ideas did you have to forego because the benefit did not seem to justify the cost? What if you could enable all that innovation lurking on your team and empower anyone with a great idea and the ability to code to realize it at close to zero cost?

Enter Slash Webtasks: all you need is code. What if you could just write code to extend Slack, without worrying about servers, hosting, or scalability? What if that extension authoring experience was integrated into Slack itself? Unhappy with how many good ideas were not realized on our team, these were some of the questions we started asking at Auth0. As a result, we've created Slash Webtasks. Slash Webtasks enable you to extend Slack with Node.js: no servers, no hosting, just code. The Slash Webtask experience allows any member of a Slack team to use Node.js to create and run a new Slack extension from within Slack itself, going directly from a great idea to writing code and running it, cutting out the layer of concerns related to operating a service. Once we rolled out Slash Webtasks in our own Slack team at Auth0, it generated an explosion of new applications and a lot of excitement; all the nice-to-have ideas people had to suppress due to cost considerations finally found an easy outlet. You can install Slash Webtasks on your own Slack team from webtask.io. All the Slack extensibility are belong to you.

What have people at Auth0 done with the newfound powers? We have seen a number of data-reporting extensions created that present real-time KPIs, or allow access to tailored reports from our Redshift data warehouse in AWS. One extension generates a summary report of key information we have about a potential customer that informs our marketing and sales activities. Many
extensions that help in devops and operations sprang up as well. We can now quickly find out the current health status of all systems, and if systems are on fire and we need to bring a specific set of people in to fix the problem, we can now send them an SMS message right from Slack: this Slash Webtask uses Twilio to send texts to phone numbers associated with a specific team, a redirection that allows us to easily implement servicing rotations. And no, the wakeup name is not a joke; we mean business when this is used. In addition to reducing the cost of creating extensions that are critical to our operations, several nice-to-have ideas were also quickly implemented. One can now add new product ideas to Product Board without ever leaving the Slack environment, and you can see how this approach could be used to file GitHub or Pivotal issues as well. Lastly, the technology enabled the creation of a few lighthearted extensions that don't directly support our core business but help nurture Auth0 culture and make Auth0 a great place to work: since emoticons are so 2015 and by now we've used up all of them, we've devised a way to express one's feelings in a more dynamic way in the midst of a Slack discussion. The bottom line is that Slash Webtasks allowed us to greatly reduce the friction and cost of turning an idea into reality. All you need is code.

Who killed the server? Inside Slash Webtasks. Spoiler alert: it wasn't the butler. Slash Webtasks run on top of the Auth0 Webtask technology, which provides the necessary computation and isolation primitives to securely execute Node.js code in a multi-tenant environment like the Slack platform. Auth0 Webtasks were created to support extensibility of the Auth0 identity platform, and have been deployed and operated at scale since 2014. While the technology existed before serverless was a word, it embodies many of the same principles: the essence of the webtask platform is to make development focused primarily on writing code, rather than making servers a first-class concept. The What
is Serverless? post describes these principles in more detail. While dogfooding the webtask technology at Auth0, we realized its applicability goes well beyond our internal use case; specifically, webtasks are a great fit for any platform which uses webhooks as the extensibility mechanism. Slash Webtasks build on top of that concept by embracing Slack's webhook-based extensibility model. When you install the Slash Webtasks app in your Slack team, a new /wt slash command is created within your team. It is associated with a single, global webhook endpoint; all requests from any of the teams that chose to install Slash Webtasks are processed by that single endpoint. The implementation of that endpoint is a webtask itself, though this is merely an illustration of the law of the instrument rather than a critical aspect of the design.

The /wt slash command serves two purposes. First, it exposes a set of sub-commands that act as a management interface for creating, editing, listing, and removing individual extensions. Second, it acts as a proxy for executing the extensions. Several named Slash Webtask extensions can be created within a Slack team, and each of them is implemented as an individual webtask: a Node.js function that accepts Slack's webhook payload on input and must respond with the JSON payload that Slack expects. These webtasks are authored within the Webtask Editor, which is an integral part of the Auth0 Webtask platform. In addition to writing code, the Webtask Editor also allows you to specify secrets that the code will be provided with at runtime via the ctx.secrets parameter. This gives you a very convenient way to pass API keys and other credentials (e.g. a Twilio API key or a MongoDB URL) into your webtask code without having to embed them in code or rely on external services; this is all part of the Auth0 Webtask platform itself, not specific to Slash Webtasks. Once a Slash Webtask extension is created, it can be executed by anyone on the team using the /wt slash command; at this point the webhook behind the /wt command acts as a proxy.
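As an illustration of the shape described above (this is our sketch, not an actual Slash Webtask from the post), a minimal handler that accepts a Slack slash-command payload and returns the JSON message Slack renders could look like this; the response_type values follow Slack's documented slash-command contract, while the function name is hypothetical:

```javascript
// Hypothetical handler with the shape of a Slack slash-command webhook.
// Input: the parsed payload Slack POSTs (text, user_name, ...).
// Output: the JSON body Slack renders back in the channel.
function handleSlashCommand(payload) {
  var user = payload.user_name || 'someone';
  var text = (payload.text || '').trim();

  return {
    // 'ephemeral' replies are visible only to the invoking user;
    // 'in_channel' would post the reply to the whole channel.
    response_type: 'ephemeral',
    text: 'Hello ' + user + ', you said: ' + (text || '(nothing)')
  };
}

var reply = handleSlashCommand({ user_name: 'tomek', text: 'status' });
console.log(reply.text); // 'Hello tomek, you said: status'
```

In a real webtask, credentials for downstream services (the Twilio key, the MongoDB URL) would come from ctx.secrets rather than being embedded in the function body.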
The individual slash webtask extensions are executed in isolated environments to ensure they do not affect each other's execution within a team, and that slash webtasks of different Slack teams are completely isolated from one another. This isolation is achieved by running each slash webtask extension in its own webtask container. A webtask container is a fundamental isolation concept supported by the Auth0 Webtask platform: two webtasks running in different webtask containers are guaranteed to be isolated from one another in terms of memory, network, disk, and CPU. How this isolation is implemented within Auth0 Webtasks is a topic for another post. So it was not the butler who killed the server. It was Auth0 scratching its own extensibility itch with webtasks, and then applying the battle-tested technology to the extensibility of the Slack platform. Where do you go from here? You go to https://webtask.io/slack and install Slash Webtasks in your own Slack team. Slash Webtasks helped us fully embrace the benefits of Slack extensibility in driving our business at Auth0, and we hope you will realize similar benefits.", "image" : "https://cdn.auth0.com/blog/extend-slack/logo.png", "date" : "December 21, 2016" } , { "title" : "Introduction to Progressive Web Apps (Offline First) - Part 1", "description" : "Progressive Web Apps are the future. 
Implement offline functionality and make your mobile web app feel like a native app.", "author_name" : "Prosper Otemuyiwa", "author_avatar" : "https://en.gravatar.com/avatar/1097492785caf9ffeebffeb624202d8f?s=200", "author_url" : "http://twitter.com/unicodeveloper?lang=en", "tags" : "pwa", "url" : "/introduction-to-progressive-apps-part-one/", "keyword" : "TL;DR: Web development has evolved significantly over the years, allowing developers to deploy a website or web application and serve millions of people around the globe within minutes. With just a browser, a user can put in a URL and access a web application. With progressive web apps, developers can deliver amazing app-like experiences to users using modern web technologies. In this article, you'll get to understand how to build a progressive web app that works offline. Introduction to progressive web apps: a progressive web application is basically a website built using modern web technologies that acts and feels like a mobile app. In 2015, Alex Russell (Google engineer) and Frances Berriman coined the term 'progressive web apps'. Google has been working intensely on making sure that progressive web apps can really give users a native-app-like experience. The flow of a typical progressive web app goes thus: it starts out as accessible in tabs on the web browser; it shows the option of adding to the homescreen of the device; it progressively starts exhibiting app-like properties such as offline usage, push notifications, and background sync. Until now, mobile apps could do a lot of things that web apps couldn't really do. Progressive web apps are web apps that try to do what mobile apps have been doing for a long time. They are web applications that combine the best of the web and the best of apps: they can load very fast on slow network connections, work offline, send push notifications, and load on the home screen with the power of the web app manifest. A progressive web application is basically a website built using modern web technologies that acts and feels like a mobile app (tweet this). Remember the splash screen that native apps provide? Right now, the latest versions of Chrome on Android support the use of a splash screen to give your web app a native experience, all thanks to progressive web apps (source: developers.google.com). Features of progressive web apps: what does it mean for a web app to be progressive? This new class of web applications has characteristics that define its existence. Without much ado, these are the features of progressive web apps. Responsive: the UI must fit the device's form factor (desktop, mobile, and tablet). App-like: when interacting with a progressive web app, it should feel like a native app. Connectivity independent: it should work offline (via service workers) or in areas of low connectivity. Re-engageable: through features like push notifications, users should be able to consistently engage and re-use the app. Installable: a user should be able to add it to their homescreen and launch it from there whenever they need to re-use the app. Discoverable: it should be identified as an application and be discoverable by search engines. Fresh: it should be able to serve new content in the app when the user is connected to the internet. Safe: it should be served via HTTPS to prevent content tampering and man-in-the-middle attacks. Progressive: regardless of the browser choice, it should work for every user. Linkable: easy to share via URL. Production use cases of progressive web apps: several developers and companies have re-developed their websites into progressive web apps. I'll give a summary of three significant products that are progressive web apps and the benefits they have accrued over time. Flipkart Lite: Flipkart is one of India's largest online shops. They created a progressive web app, Flipkart Lite, that resulted in a 70% increase in conversions. They took advantage of the super-powers progressive web apps offer by using service workers, push notifications, add-to-home-screen, splash screen, and smooth animations, and it resulted in the following: 3x less data usage; 40% higher re-engagement rate; users spend more time on the platform; 70%
conversion rate (stats: Google PWA showcase; Flipkart splash screen; add-to-homescreen on Flipkart; more information on the case study here). Housing: Housing.com is one of India's foremost startups; they provide an online real estate platform in India. They created a progressive web app, which resulted in a 38% increase in conversions across browsers and also the following: 40% lower bounce rate; 10% longer average session; 30% faster page load (stats: Google PWA showcase; add-to-homescreen on Housing; option to turn on push notifications; more information on the case study here). AliExpress: AliExpress, the very popular global online retail marketplace, had the challenge of getting users to download their mobile app and re-engage as much as they wanted. To solve this challenge, they decided to create a progressive web app for their mobile web users, and the results were very impressive: 104% increase in conversion rate for new users; 74% increase in time spent per session across all browsers; 2x more pages visited per session per user across all browsers (stats: Google PWA showcase; AliExpress mobile navigation; AliExpress mobile homepage; more information on the case study here). These companies have benefitted immensely from deploying progressive web apps. Next, let's dive further into one of the major components of a progressive web app: service workers. Service workers: a service worker is a programmable proxy (a script) that your browser runs in the background. It has the ability to intercept and handle HTTP requests and respond to them in various ways. It responds to network requests, connectivity changes, and more. Jeff Posnick, a Google engineer, gave one of the best explanations I have seen: a service worker is an air traffic controller. Think of your web app's requests as planes taking off; the service worker is the air traffic controller that routes the requests. It can load from the network or even off the cache. A service worker can't access the DOM, but it can make use of the Fetch and Cache APIs. You can use the service worker to cache all static resources, which automatically reduces network requests and improves performance. The service worker can be used to display the application shell, inform users that they are disconnected from the internet, and serve up a page for the user to interact with once they are offline. A service worker file (e.g. sw.js) needs to be placed in the root directory (service worker file in the root directory). To get started with service workers in your progressive web app, you need to register the service worker in your app's JS file. If your application's JS file is app.js, then inside the file we'll have a piece of JavaScript code like so:

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js').then(function() {
    console.log('service worker registered');
  });
}

The piece of code above checks if the browser supports service workers and, if it does, registers the service worker file. Once the service worker is registered, its lifecycle begins the moment a user visits the page for the first time. The service worker's lifecycle goes thus. Install: an install event is triggered the first time a user visits the page. During this phase, the service worker is installed in the browser. During this installation, you can cache all the static assets in your web app like so:

// install service worker
self.addEventListener('install', function(event) {
  console.log('service worker installing');
  event.waitUntil(
    // open the cache
    caches.open(cacheName).then(function(cache) {
      console.log('caching app shell at the moment');
      // add files to the cache
      return cache.addAll(filesToCache);
    })
  );
});

The filesToCache variable represents an array of all the files you want to cache; cacheName refers to the name given to the cache store. Activate: this event is fired when the service worker starts up:

// fired when the service worker starts up
self.addEventListener('activate', function(event) {
  console.log('service worker activating');
  event.waitUntil(
    caches.keys().then(function(cacheNames) {
      return Promise.all(cacheNames.map(function(key) {
        if (key !== cacheName) {
          console.log('removing old cache');
          return caches.delete(key);
        }
      }));
    })
  );
  return self.clients.claim();
});

Here the service worker updates its cache whenever any of the app shell files change. Fetch: this event helps serve the app shell from the cache.
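The install/activate/fetch flow above centers on a cache-first lookup: try the cache, fall back to the network. Here is a minimal sketch of that decision logic with caches.match and fetch stubbed as injectable functions (an assumption made so the logic can run and be tested outside a browser; in a real service worker you would call those APIs directly).

```javascript
// Cache-first strategy sketch: resolve from cacheLookup when possible,
// otherwise fall back to networkFetch. Both stand in for caches.match / fetch.
function respondCacheFirst(request, cacheLookup, networkFetch) {
  return cacheLookup(request).then(function (cached) {
    return cached || networkFetch(request);
  });
}
```

Inside an actual service worker this is exactly the body of the fetch handler's e.respondWith(...) call, with caches.match(e.request) and fetch(e.request) plugged in.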
caches.match() dissects the web request that triggered the event and checks to see if it's available in the cache. It then either responds with the cached version or uses fetch to get a copy from the network. The response is returned to the web page with e.respondWith():

self.addEventListener('fetch', function(e) {
  console.log(e.request.url);
  e.respondWith(
    caches.match(e.request).then(function(response) {
      return response || fetch(e.request);
    })
  );
});

At this time of writing, service workers are supported by Chrome, Opera, and Firefox; Safari and Edge don't support them yet. The service worker specification and primer are very useful resources for learning more about service workers. Application shell: earlier in the post, I mentioned the app shell several times. The application shell is the minimal HTML, CSS, and JavaScript powering the user interface of your app. A progressive web app ensures that the application shell is cached for fast, instant loading on repeated visits to the app. What we will be building: we'll build a simple progressive web app. The app simply tracks the latest commits from a particular open source project. As a progressive web app, it should: let a user view the latest commits without an internet connection; load instantly on repeated visits; once the push notification button is turned on, notify the user of the latest commits to the open source project; be installable (added to the homescreen); and have a web application manifest. Talk is cheap, let's build. Create an index.html and a latest.html file in your code directory like so (reconstructed from the stripped markup; class names and paths follow the fragments in the original):

<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <meta http-equiv="x-ua-compatible" content="ie=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>commits pwa</title>
  <link rel="stylesheet" type="text/css" href="css/style.css">
</head>
<body>
  <div class="app app__layout">
    <header>
      <span class="header__icon">
        <svg class="menu__icon no--select" width="24px" height="24px" viewBox="0 0 48 48" fill="#fff"><path d="M6 36h36v-4H6v4zm0-10h36v-4H6v4zm0-14v4h36v-4H6z"/></svg>
      </span>
      <span class="header__title no--select">pwa - home</span>
    </header>
    <div class="menu">
      <div class="menu__header"></div>
      <ul class="menu__list">
        <li><a href="index.html">home</a></li>
        <li><a href="latest.html">latest</a></li>
      </ul>
    </div>
    <div class="menu__overlay"></div>
    <main class="app__content">
      <section class="section">
        <h3>stay up to date with r-i-l</h3>
        <img class="profile-pic" src="/images/books.png" alt="helloworld">
        <p class="home-note">latest commits on resources i like</p>
      </section>
      <button class="fab fab__push">
        <span class="fab__ripple"></span>
        <img class="fab__image" src="/images/push-off" alt="push notification"/>
      </button>
      <!-- toast msgs -->
      <div class="toast__container"></div>
    </main>
  </div>
  <script src="/js/app.js"></script>
  <script src="/js/toast.js"></script>
  <script src="/js/offline.js"></script>
  <script src="/js/menu.js"></script>
</body>
</html>

latest.html uses the same shell (with a "pwa - commits" title) plus a card container for the commits and loads /js/latest.js:

<div class="card_container">
  <h2 style="margin-top: 70px" align="center">latest commits</h2>
  <div class="container">
    <div class="card first"></div>
    <div class="card second"></div>
    <div class="card third"></div>
    <div class="card fourth"></div>
    <div class="card fifth"></div>
  </div>
  <div class="loader">
    <svg viewBox="0 0 32 32" width="32" height="32">
      <circle id="spinner" cx="16" cy="16" r="14" fill="none"/>
    </svg>
  </div>
</div>

Create a css folder in your directory and grab the style.css file from here. Create a js folder in your directory and add the following files. offline.js:

(function() {
  'use strict';
  var header = document.querySelector('header');
  var menuHeader = document.querySelector('.menu__header');
  // after DOM loaded
  document.addEventListener('DOMContentLoaded', function() {
    // on initial load, check connectivity
    if (!navigator.onLine) {
      updateNetworkStatus();
    }
    window.addEventListener('online', updateNetworkStatus, false);
    window.addEventListener('offline', updateNetworkStatus, false);
  });
  // update network status
  function updateNetworkStatus() {
    if (navigator.onLine) {
      header.classList.remove('app__offline');
      menuHeader.style.background = '#1e88e5';
    } else {
      toast('you are now offline');
      header.classList.add('app__offline');
      menuHeader.style.background = '#9e9e9e';
    }
  }
})();

The code above helps the user visually differentiate offline from online. menu.js:

var menuIconElement = document.querySelector('.header__icon');
var menuElement = document.querySelector('.menu');
var menuOverlayElement = document.querySelector('.menu__overlay');
// menu click event
menuIconElement.addEventListener('click', showMenu, false);
menuOverlayElement.addEventListener('click', hideMenu, false);
menuElement.addEventListener('transitionend', onTransitionEnd, false);
// to show menu
function showMenu() {
  menuElement.style.transform = 'translateX(0)';
  menuElement.classList.add('menu--show');
  menuOverlayElement.classList.add('menu__overlay--show');
}
// to hide menu
function hideMenu() {
  menuElement.style.transform = 'translateX(-110%)';
  menuElement.classList.remove('menu--show');
  menuOverlayElement.classList.remove('menu__overlay--show');
  menuElement.addEventListener('transitionend', onTransitionEnd, false);
}
var touchStartPoint, touchMovePoint;
/* swipe from edge to open menu */
// `touchstart` event to find where the user starts the touch
document.body.addEventListener('touchstart', function(event) {
  touchStartPoint = event.changedTouches[0].pageX;
  touchMovePoint = touchStartPoint;
}, false);
// `touchmove` event to determine user touch movement
document.body.addEventListener('touchmove', function(event) {
  touchMovePoint = event.touches[0].pageX;
  if (touchStartPoint < 10 && touchMovePoint > 30) {
    menuElement.style.transform = 'translateX(0)';
  }
}, false);

function onTransitionEnd() {
  setTimeout(function() {
    menuElement.removeEventListener('transitionend', onTransitionEnd, false);
  }, 10);
}

The code above is responsible for the animation of the menu button. toast.js:

(function(exports) {
  var toastContainer = document.querySelector('.toast__container');
  // to show notification
  function toast(msg, options) {
    if (!msg) return;
    options = options || 3000;
    var toastMsg = document.createElement('div');
    toastMsg.className = 'toast__msg';
    toastMsg.textContent = msg;
    toastContainer.appendChild(toastMsg);
    // show toast for 3 secs and hide it
    setTimeout(function() {
      toastMsg.classList.add('toast__msg--hide');
    }, options);
    // remove the element after hiding
    toastMsg.addEventListener('transitionend', function(event) {
      event.target.parentNode.removeChild(event.target);
    });
  }
  exports.toast = toast; // make this method available globally
})(typeof window === 'undefined' ? module.exports : window);

The code above is responsible for the app-like toast notification (a timed pop-up widget). The latest.js and app.js files can be empty for now. Now spin up your app using a local server (e.g. http-server). Your web app should look like this: side menu; index page; latest page. Your application shell is also highlighted above; no dynamic content is loaded yet. We need to fetch commits from GitHub's API. Fetch dynamic content: open up your latest.js file and add the code below:

var app = {
  spinner: document.querySelector('.loader')
};
var container = document.querySelector('.container');
// get commit data from GitHub API
function fetchCommits() {
  var url = 'https://api.github.com/repos/unicodeveloper/resources-i-like/commits';
  fetch(url)
    .then(function(fetchResponse) {
      return fetchResponse.json();
    })
    .then(function(response) {
      var commitData = {
        first: {
          message: response[0].commit.message,
          author: response[0].commit.author.name,
          time: response[0].commit.author.date,
          link: response[0].html_url
        },
        second: { /* same shape, from response[1] */ },
        third: { /* same shape, from response[2] */ },
        fourth: { /* same shape, from response[3] */ },
        fifth: { /* same shape, from response[4] */ }
      };
      container.innerHTML =
        '<h4>' + response[0].commit.message + '</h4>' +
        response[0].commit.author.name +
        ' time committed: ' + new Date(response[0].commit.author.date).toUTCString() +
        ' <a href="' + response[0].html_url + '">click me to see more</a>' +
        /* ...repeated for response[1] through response[4]... */ '';
      app.spinner.setAttribute('hidden', true); // hide spinner
    })
    .catch(function(error) {
      console.log(error);
    });
}
fetchCommits();

In addition, reference the latest.js script in your latest.html file like so: <script src="js/latest.js"></script>. Also, add the spinner and the toast container to latest.html:

<div class="loader">
  <svg viewBox="0 0 32 32" width="32" height="32">
    <circle id="spinner" cx="16" cy="16" r="14" fill="none"/>
  </svg>
</div>
<div class="toast__container"></div>

In the latest.js code, you can observe that we are fetching the commits from GitHub's API and appending them to the DOM.
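The response-shaping step described above can be isolated into a small helper. This is a sketch assuming the array shape returned by GitHub's commits endpoint (fields message, author.name, author.date under commit, plus html_url, as used in the tutorial); the helper name extractCommit is illustrative.

```javascript
// Given the array from GET /repos/:owner/:repo/commits, pull out the fields
// the page renders for the entry at the given index.
function extractCommit(response, index) {
  var entry = response[index];
  return {
    message: entry.commit.message,
    author: entry.commit.author.name,
    time: entry.commit.author.date,
    link: entry.html_url
  };
}
```

With a helper like this, the first/second/third/fourth/fifth objects become extractCommit(response, 0) through extractCommit(response, 4) instead of five hand-written literals.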
Now our latest.html page should look like this (latest.html page). Precache the app shell with service workers: we need to cache our app shell using a service worker to ensure our app loads super-fast and works offline. First, create a service worker file in your root directory and name it sw.js. Second, open up your app.js file and register the service worker by adding this piece of code:

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

Open the sw.js file and add this piece of code:

var cacheName = 'pwa-commits-v3';
var filesToCache = [
  '/',
  '/css/style.css',
  '/images/home.svg',
  '/images/ic_refresh_white_24px',
  '/images/profile',
  '/images/push-on'
];

Like I explained in the earlier part of this post, all our assets are in the filesToCache array. As the service worker gets installed, it opens the cache in the browser and adds all the files we defined in the array to the pwa-commits-v3 cache. The install event fires once the service worker is installed. The activate phase ensures that your service worker updates its cache whenever any of the app shell files change. The fetch event phase serves the app shell from the cache. Note: check out Google's sw-toolbox and sw-precache libraries for an easier and better way of precaching your assets and generating service workers. Now reload your web app and open DevTools. Go to the Service Workers pane on the Application tab. Ensure you enable the 'Update on reload' checkbox to force the service worker to update on every page reload. Works offline or not? Reload your page, then go to the Cache Storage pane on the Application tab of Chrome DevTools. Expand the section and you should see the name of our app shell cache listed on the left-hand side (cache storage). When you click on your app shell cache, you can see all of the resources it has currently cached. Let's test out its offline capability now. Head over to the Service Workers pane again and tick the 'Offline' checkbox. A small yellow warning icon should appear next to the Network tab (offline network tab in Chrome DevTools). Now reload your web page and check it out. Does it work offline? (Index page offline.) Yaaay, the index page is served offline. What about the latest page that shows the latest commits? (Latest page offline.) Yaaay, the latest page is served offline. But wait a minute: where is the data? Where are the commits? Oops! Our app still tries to query the GitHub API when the user is disconnected from the internet, and it fails (data fetch failure, Chrome DevTools). What do we do? There are different ways to handle this scenario. One of the many options is telling the service worker to serve up an offline page. Another option is to cache the commit data on first load, load the locally-saved data on subsequent requests, then fetch recent data later when the user is connected. The commit data can be stored in IndexedDB or local storage. We'll conclude here for now. Aside: easy authentication with Auth0. You can use Auth0 Lock for your progressive web app. With Lock, showing a login screen is as simple as including the auth0-lock library and then calling it in your app like so:

// initiating our Auth0Lock
var lock = new Auth0Lock('YOUR_CLIENT_ID', 'YOUR_AUTH0_DOMAIN');
// listening for the authenticated event
lock.on('authenticated', function(authResult) {
  // use the token in authResult to get the profile and save it to localStorage
  lock.getProfile(authResult.idToken, function(error, profile) {
    if (error) {
      // handle error
      return;
    }
    localStorage.setItem('idToken', authResult.idToken);
    localStorage.setItem('profile', JSON.stringify(profile));
  });
});
// implementing Lock
document.getElementById('btn-login').addEventListener('click', function() {
  lock.show(); // showing Lock
});

(Auth0 Lock screen.) In the case of an offline-first app, authenticating the user against a remote database won't be possible when network connectivity is lost. However, with service workers, you have full control over which pages and scripts are loaded when the user is offline. This means you can configure your offline.html file to display a useful message stating that the user needs to regain connectivity to log in again, instead of displaying the Lock login screen. Conclusion: in this article, we covered the basics of how progressive web apps work in general. We also made our app partially work offline. In the next part of this tutorial, we will
cover how to make our app fully work offline and load instantly by storing the dynamic commit data in the browser using one of its available forms of storage.", "image" : "https://cdn.auth0.com/blog/pwa/offline-first-Logo.png", "date" : "December 19, 2016" } , { "title" : "Announcing the Auth0 Authorization v2 Extension!", "description" : "Introducing the new version of our Authorization Extension, which adds support for roles and permissions", "author_name" : "Sandrino Di Mattia", "author_avatar" : "https://s.gravatar.com/avatar/e8a46264ec428f6b37018e1b962b893a.png", "author_url" : "https://www.twitter.com/sandrinodm", "tags" : "auth0", "url" : "/announcing-authorization-extension-v2/", "keyword" : "Background: authentication has always been the core of Auth0, but once you know who your users are, you'll probably also need to know what they can do. Therefore, after authenticating users, you'll need to authorize them. There are many different ways to do this, from complex products to home-grown solutions. Using our rules engine, you could also consume this information during the login process. Concepts: the Authorization Extension, which you can now install using the Extensions tab in the dashboard, tries to provide customers with a generic approach to managing authorization using three top-level concepts: groups, roles, and permissions. Groups: groups are collections of users. They are a common way to organize users in enterprise directories like Active Directory. A company might create a group for every department, such as HR, Finance, Accounting, and IT. Users can be added to one or more groups, but groups can also be nested, with members of one group automatically added as members of another group. The main reason for having groups is that they let us collect people who share the same profile within the company; it is easier to assign roles and permissions to groups than to individual people, who can get sick, go on vacation, or leave the company. Permissions and roles: while groups are bound to an organization and not an application, the same is not true for roles and permissions. If you look at an application that you are building, you'll notice that users can do many things within it. Everything your users can do is an action, including opening a record, updating one, deleting one, reporting, and changing settings. A permission determines whether you are allowed to execute an action or not, such as read:users, run:reports, or update:settings. These permissions only make sense within the application: a generate:invoice permission might make a lot of sense in your accounting application but no sense at all in your planning tool. Permissions represent actions that you can execute as a user within an application, and roles group these permissions into logical collections. A timesheet application can have a 'timesheet user' role and a 'timesheet manager' role. A user will have certain permissions, such as read:timesheets, update:timesheets, and create:timesheets, while a manager will have additional permissions, such as approve:timesheet and reject:timesheet. These roles can be assigned to specific users or to groups; if roles are assigned to a group, every user of that group will receive those roles (and their permissions). Consuming this information in your applications: after setting up everything in the extension, your applications will need to consume this information. The extension offers three ways to do this: adding information to the id_token; adding information to the user's app_metadata; or getting the information from a policy decision point (PDP). In the configuration section, you can configure the behavior of the extension. Any change you make here will deploy a rule to your Auth0 account, which will add the information to the token, the app_metadata, or both. If you're implementing RBAC, for example, you could just check the box to store the roles in the token. But if you need to understand later which permissions a user has in a specific application, you could make the following call to get the user's authorization data in the context of an application (a POST to the policy endpoint shown next).
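The group/role/permission composition described above amounts to a simple lookup: a user's groups map to roles, and roles map to permissions. Here is a hypothetical sketch of that resolution; the data shapes and the function name effectivePermissions are illustrative assumptions, not the extension's actual storage format or API.

```javascript
// Resolve the flattened, de-duplicated permission set for a user, given the
// user's groups, a group -> roles map, and a role -> permissions map.
function effectivePermissions(userGroups, groupRoles, rolePermissions) {
  var perms = new Set();
  userGroups.forEach(function (group) {
    (groupRoles[group] || []).forEach(function (role) {
      (rolePermissions[role] || []).forEach(function (p) { perms.add(p); });
    });
  });
  return Array.from(perms).sort();
}
```

In the real extension this resolution happens server-side; the sketch only shows why assigning roles to groups scales better than assigning permissions to individual users.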
POST https://sandrino-dev.us.webtask.io/api/users/adjohn@fabrikam.com/policy/9cdfqbunb9zvyrcpfwjlzph9tuwclgio

This will return the groups, roles, and permissions for a user in the context of the current application, for example:

{
  "groups": ["distribution"],
  "permissions": ["update:own-receipts", "delete:own-receipts", "update:own-reports", "delete:own-reports", "submit:receipts", "submit:reports", "approve", "reject"],
  "roles": ["expense user", "expense manager"]
}

Feedback: this extension was built using React, Redux, hapi, and webtask. When you install it, the entire application will run in the webtask container of your Auth0 account. The full source code is available at https://github.com/auth0/auth0-authorization-extension; feel free to open a GitHub issue if you have any feedback. What lies ahead: our API Authorization feature has been in public preview for some time now, and it will be interesting to use the Authorization Extension in that context. Depending on your use case, your permissions can be represented as scopes in your access token. Here are some additional resources to get you started: official documentation; automatically provisioning groups.", "image" : "https://cdn.auth0.com/blog/auth-extensions-v2/logo.png", "date" : "December 14, 2016" } , { "title" : "Adding Authentication to Shiny Server in 4 Simple Steps", "description" : "Learn how to add authentication to your free Shiny Server setup and secure your interactive R apps!", "author_name" : "Sebastián Peyrott", "author_avatar" : "https://en.gravatar.com/userimage/92476393/001c9ddc5ceb9829b6aaf24f5d28502a.png?size=200", "author_url" : "https://twitter.com/speyrott?lang=en", "tags" : "shiny", "url" : "/adding-authentication-to-shiny-server/", "keyword" : "Shiny Server is a great tool to create visualizations and interactive documents for your R applications, and it is also very popular. Unfortunately, the free version of Shiny Server does not support any form of authentication whatsoever. This precludes many common use cases, such as taking your apps online or limiting access to certain users inside your network. In this article we will show you how to add
authentication to the free version of Shiny Server using Auth0. Read on! We show you how to add authentication to the free version of Shiny Server (tweet this). Introduction: so, you know Shiny Server? If not, ask your closest data scientist and watch him or her drool. Data scientists love to turn their powerful R analyses into visual, interactive applications, and Shiny is just the right tool for that. Take a look at some of the demos on the product page. Nifty, huh? Well, there's a catch: Shiny Server is currently available in two versions: an open-source, limited edition, and a full-blown 'pro' edition. Fortunately, for many use cases the open-source edition is more than enough. But Shiny is a web application, and two very important things for any web app are missing from the open-source edition: SSL/TLS support and authentication. In other words, using the open-source edition for public-facing apps, or internal apps that require at least some access control, is a no-go. A while ago we explored the alternative of using an Apache server as a reverse proxy for Shiny with an authentication module (auth_openidc). While this worked most of the time, there were two problems with this approach: WebSockets support was not available (it is used internally by Shiny for a better user experience), and connections timed out after a certain amount of time. However, not everything is bad about this approach; we just need to power it up a bit. So our own data scientist, Pablo Seibelt, took it upon himself to come up with a working solution: shiny-auth0. shiny-auth0 is a simple reverse proxy with authentication, tuned up for Shiny Server. It runs on Node.js and makes use of Auth0 (through passport.js) for authentication, and http-proxy for full-blown proxy support. It is designed to run behind a fast nginx reverse proxy, which can be found in most production environments. shiny-auth0 makes it a breeze to get authentication working with Shiny Server without getting your hands too dirty. So, let's get to work! Step 1: get Shiny Server up and running. If you already have a working Shiny Server setup with your apps, you can probably skip this step. For the purposes of giving a full working solution, in this step we will show you how to get a sample R app running on Shiny Server, and how to find out the details we need about it for the next steps (hint: its IP address and port). Shiny runs on Linux servers; we will assume a fairly common CentOS 7 / Red Hat Enterprise Linux 7 setup. If you are using other distros, read the official Shiny docs to perform the installation. Log in to the console as root and type the following commands. A word of caution: if you are not comfortable using Linux, ask a sysadmin to install Shiny Server for you; he or she can use these steps, or follow the installation guide from the official Shiny docs.

# Enable the EPEL repository (Extra Packages for Enterprise Linux)
sudo yum install epel-release
# Install R
sudo yum install R
# Run R as root
sudo R

The following commands must be input inside the R shell:

install.packages('digest')
# install the R Shiny package
install.packages('shiny', repos='https://cran.rstudio.com/')
# quit the shell; answer 'n' when asked to save
quit()

Now, back in the command shell, run:

# Download Shiny Server
curl -O https://download3.org/centos5.9/x86_64/shiny-server-1.5.1.834-rh5-x86_64.rpm
# Install it
sudo yum install --nogpgcheck shiny-server-1.5.1.834-rh5-x86_64.rpm
# Start it using systemd (it is already set up to run automatically during boot)
sudo systemctl start shiny-server

Shiny Server should now be active and running. By default, Shiny runs on port 3838. To check it, open a browser window and point it to http://localhost:3838 on the same computer where you installed it. If you don't have access to a browser on that computer, find its IP address (ip addr), then use a browser on a different computer and point it to http://your-ip-address:3838. If the computer running Shiny has a firewall set up, you will need to consult with your system administrator for the proper steps to access Shiny Server. Step 2: get nginx up and running. nginx is a powerful and popular HTTP server; it supports a ton of features and is very fast. We will use nginx to perform SSL/TLS termination. nginx will act as the public-facing server, with full TLS support (a must for secure connections). It will then forward all requests to our internal shiny-auth0 proxy server, which will run without TLS in our internal network (considered safe). Our sample nginx configuration file looks as follows:

events {
}

http {
  map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
  }

  # Listen on port 80 and redirect all requests to the
  # TLS enabled server
  server {
    listen *:80;
    # Your hostname should go here
    server_name shiny.yourhost.com;
    access_log off;
    location / {
      rewrite ^ https://$host$request_uri? permanent;
    }
  }

  # TLS enabled server
  server {
    listen 443 ssl;
    # Your hostname should go here
    server_name shiny.yourhost.com;

    # TLS/SSL certificates for your secure server should go here.
    # If you don't have a TLS certificate, you can get a free one by
    # following the free PDF available in this link:
    # https://auth0.com/blog/using-https/
    ssl_certificate localtestserver-dot-com.pem;
    ssl_certificate_key localtestserver-dot-com-key.pem;

    # To enhance security, as long as you don't need to support older browsers
    # (and you probably don't), you should only enable the most secure
    # ciphers and algorithms. This is a sane selection:
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128:AES256:AES:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK';
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_stapling on;        # requires nginx >= 1.3.7
    ssl_stapling_verify on; # requires nginx >= 1.3.7

    # This proxies requests to our shiny-auth0 authentication proxy.
proxy# requests are passed in plain httpso tls termination # is applied at this pointlocation / { proxy_set_header host $host# this points to our shiny-auth0 authentication proxy# change localhost3000 to suit the configuration of # your shiny-auth0 config proxy_pass http3000proxy_redirect http3000/ $scheme//$host/proxy_http_version 1# the following lines enable websockets proxyingdo not remove them # as they are used by shiny server to improve user experience proxy_set_header upgrade $http_upgradeproxy_set_header connection $connection_upgradeproxy_connect_timeout 7dproxy_send_timeout 7dproxy_read_timeout 7d} }}the important part is near the bottomtake a look at the last location / blockthis block tells nginx to handle all requestsinside this block you will find two directivesproxy_pass and proxy_redirectthese directives tell nginx to proxy requests to the host passed as parameter to themthis is were you should edit the configuration file to point it to your shiny-auth0 authentication serverwhich we will setup later on in this guideother important directives in this configuration file are ssl_certificate and ssl_certificate_keythese directives point nginx to your tls/ssl certificatesthese certificates are used to secure the connection to the serveryou must set a valid certificate and a private key hereas tls must be enabled to properly secure your shiny server installationif you want to learn more about tls/sslor find out how to get your own free tls certificatehead over to our using https articleyou can also ask your system administrator to perform these steps for youit is also possible to use a self-signed certificateif only certain clients need access to the serverand can install your certificate in their browserslast but not leastyou should change both server_name directives to use the right name for your hostthis is of particular importance if several hosts are being served by the same nginx configurationif in doubt about what this meansconsult with your 
system administrator. In most installations, the system-wide NGINX configuration file is located at /etc/nginx/nginx.conf.

Step 3: Setting up an Auth0 account for shiny-auth0

Since authentication will be handled by Auth0, a free Auth0 account is required to work with shiny-auth0. Don't panic: it's as simple as signing up and setting a few knobs here and there. Let's take a look. First, head over to https://auth0.com and sign up. Follow the steps to fill in your details. For simple use cases, a free account is more than enough. With a free account you get up to 7,000 users; if you need more than that, check our pricing page. After you have completed the signup process, access the Auth0 dashboard and create a new client for our shiny-auth0 app. This client will let you set up how your users will log in through shiny-auth0. There are several options you must consider: will you use a standard username/password database, or will you allow social logins (through Facebook or Google, for example)? It is up to you to decide what best fits your use case. For simplicity, we will go with a simple social login through Google, and we will only allow certain users access to our Shiny server. To create a client, go to "Clients" on the sidebar and then "Create Client" on the top right of the screen. Pick a name and then select the type of client: select "Regular Web Applications". Ignore the quickstart that is presented after that and go straight to "Settings". Take note of the Client ID, the Domain, and the Client Secret; you will need these later to set up shiny-auth0. Another important setting is the "Allowed Callback URLs" setting visible below. This is the URL the user will be redirected to after a successful authentication attempt. It is formed by the domain of your public server plus the /callback path, for instance: https://shiny.yourhost.com/callback.

Limit logins to only certain users

Having a login screen anyone can use to log in after creating a user is usually not of much use. You may want to allow only users whose email domain is the domain of your organization. To customize which users can log in, we can use rules. For our example, we will set a simple domain whitelist. Go to the Auth0 dashboard and pick "Rules" from the sidebar, then pick "Create Rule" on the top right corner of the screen. Choose "Email domain whitelist" from the access control section. This rule is simple enough that you will have no trouble understanding it:

```javascript
function (user, context, callback) {
  var whitelist = ['example.org']; // authorized domains
  var userHasAccess = whitelist.some(function (domain) {
    var emailSplit = user.email.split('@');
    return emailSplit[emailSplit.length - 1].toLowerCase() === domain;
  });

  if (!userHasAccess) {
    return callback(new UnauthorizedError('Access denied.'));
  }

  return callback(null, user, context);
}
```

Users whose email addresses have one of the domains in the whitelist array are allowed to log in. Simple as that. Do note that rules apply to all Auth0 clients (that is, multiple applications) from your account. You can filter which applications (or even connections) a certain rule applies to. Read more on rules in the docs.

Step 4: Setting up shiny-auth0 for Shiny Server authentication

Finally, we'll get to see everything working together. Once this step is done, you'll have a fully secured Shiny server. Clone the latest version of shiny-auth0 to the system that will run the authentication proxy:

```sh
git clone git@github.com:auth0/shiny-auth0.git
```

Make sure you have an up-to-date Node.js installation (if in doubt, ask your system administrator). Now install all required dependencies for shiny-auth0:

```sh
cd shiny-auth0
npm install
```

If everything went well, all dependencies for running shiny-auth0 are now locally installed. Now we will set up shiny-auth0. Create a new file named .env inside the shiny-auth0 directory with the following content:

```
AUTH0_CLIENT_SECRET=myCoolSecret
AUTH0_CLIENT_ID=myCoolClientId
AUTH0_DOMAIN=myCoolDomain
AUTH0_CALLBACK_URL=https://shiny.yourhost.com/callback
COOKIE_SECRET=somethingRandomHerePlease
SHINY_HOST=localhost
SHINY_PORT=3838
PORT=3000
```

You will see several common names here. As you can imagine, AUTH0_CLIENT_SECRET, AUTH0_CLIENT_ID, and AUTH0_DOMAIN are the client settings we took note of in step 3; proceed to fill these in here. AUTH0_CALLBACK_URL depends on the actual URL you will
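To make the role of these settings concrete, here is a minimal sketch of how a Node.js app in this style might read such values from its environment, with fallbacks to the defaults discussed in this guide. This is illustrative only; the real shiny-auth0 code may differ, and `loadConfig` is our own name:

```javascript
// Illustrative sketch of reading proxy settings from the environment.
// In shiny-auth0 these values come from the .env file; here we read a
// plain object (e.g. process.env) and fall back to the guide's defaults.
function loadConfig(env) {
  return {
    auth0ClientSecret: env.AUTH0_CLIENT_SECRET,
    auth0ClientId: env.AUTH0_CLIENT_ID,
    auth0Domain: env.AUTH0_DOMAIN,
    auth0CallbackUrl: env.AUTH0_CALLBACK_URL,
    cookieSecret: env.COOKIE_SECRET,          // used to validate the client-side cookie
    shinyHost: env.SHINY_HOST || 'localhost', // where Shiny Server is running
    shinyPort: Number(env.SHINY_PORT || 3838),
    port: Number(env.PORT || 3000)            // port for the auth proxy itself
  };
}

const config = loadConfig(process.env);
console.log(config.shinyHost, config.shinyPort, config.port);
```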
use to access your Shiny server from the outside. It is the URL the user will be redirected to after authentication, and it should be one of the "Allowed Callback URLs" from step 3. It is very important to leave the trailing /callback part of the URL in place, whatever the name of your host is. COOKIE_SECRET should be a fairly long random string that should be kept secret; this secret is used to validate the cookie stored client-side. Put a long, random string here. SHINY_HOST and SHINY_PORT are the actual host and port for your running Shiny Server installation from step 1. If everything is running on the same server, the defaults should be OK (localhost and port 3838). Lastly, PORT is the port where the shiny-auth0 authentication proxy will run. This is the port that should be set in the proxy_pass and proxy_redirect directives from step 2. If shiny-auth0 will run on a different host from NGINX, don't forget to update the localhost part of these directives in nginx.conf as well. We're almost there! If you have reached this point, make sure everything is up and running:

```sh
# Run the following command in the host for Shiny Server
sudo systemctl start shiny-server

# Run the following command in the host for NGINX
sudo systemctl start nginx

# Run the following command in the host for shiny-auth0,
# inside the shiny-auth0 folder
node bin/www
```

Everything is up? Now test that everything is running as it should: from a different computer, attempt to access your Shiny host from a browser, as set up in the NGINX configuration (the `server_name` directive).

Optional: setting up autostart

If you are not getting much help from your system administrators, the missing piece of the puzzle is to get shiny-auth0 to start automatically on each boot. Some distributions have their own startup systems, so covering every variation in this post is out of scope. Many Linux distributions are converging on systemd for daemon management, so we'll set up a simple systemd service file for our shiny-auth0 server. Make sure Shiny Server and NGINX are set up to start automatically as well:

```sh
sudo systemctl enable shiny-server
sudo systemctl enable nginx
```

Now, let's take a look at a sample systemd service file for shiny-auth0:

```ini
[Service]
ExecStart=/usr/bin/node /home/shiny-auth0/shiny-auth0/bin/www
Restart=always
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=shiny-auth0
User=shiny-auth0
Group=shiny-auth0
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target
```

Save this file as /etc/systemd/system/shiny-auth0.service. You may have noticed we created a specific user to run this application. This is a common practice for services that do not require root permissions: by running the service as an unprivileged user, even if the service is compromised, the attacker has limited access to the server (unless he or she can deploy an unpatched privilege escalation exploit). If you do want to run the service as root, remove the User and Group directives from the file. Remember to set the right path to your local copy of shiny-auth0 in the ExecStart directive. You can now make shiny-auth0 start automatically during boot:

```sh
# Enable shiny-auth0 autostart during boot
sudo systemctl enable shiny-auth0

# To start it now, without rebooting
sudo systemctl start shiny-auth0
```

If you need help with any of this, ask your local sysadmin. If you have succeeded in running Shiny Server with Auth0 by following the guide above, your local system administrator will have no problems making the necessary changes to run this on the appropriate servers, with automatic start on boot.

Conclusion

Shiny Server is a great tool to visualize data using R. In spite of its limitations, the open-source version is really powerful. TLS/SSL support and authentication are essential for user-facing apps, sometimes even inside private networks. Using Auth0, shiny-auth0, and NGINX makes adding authentication and TLS support to Shiny Server Open Source Edition a breeze, even for people not versed in the arcana of Unix commands or programming. Leave us your thoughts in the comments section below. Cheers!", "image" : 
"https://cdn.auth0.com/blog/shiny-server-2/logo.png", "date" : "December 13, 2016" } , { "title" : "Could your iPhone get stolen while it's unlocked?", "description" : "How to prevent criminals to mess with your accounts", "author_name" : "Eugene Kogan", "author_avatar" : "https://s.gravatar.com/avatar/667b1c82b6cc2241ff176d50c65da603?s=200", "author_url" : "https://twitter.com/eugk", "tags" : "security", "url" : "/could-your-iphone-get-stolen-while-it-is-unlocked/", "keyword" : "yesdefinitelyits happened to a number of people i work withimagine youre walking down the streetchecking slack or facebookand someone on a bike rides by and grabs your phoneguess whatyour phone is unlocked and ready for the thief to start using itthe same thing can happen in a barthe subwayor any other crowded placeeven scotland yard has started using this techniquebut thats another matterthe average iphone thief doesnt care about your datahe just wants to wipe the phone so he can resell itthis kind of crime is especially prevalent in countries like argentinawhere theres a large black market for apple productsyou might be thinking that the thief still needs to disable find my iphone to release apples activation lockthats truebut if the phone itself is already unlockedall he needs is your icloud passworddo you get your email on your phoneis it the same email address thats tied to your icloud accountwellguess where icloud password reset emails used to gobefore a recent update to iosit was possible to reset someones icloud password directly from their unlocked phonesimply by virtue having access to their email accounteven if you use apples two-factor authenticationwhich you absolutely shouldthat works over sms or push notificationswhich also go to your phonepreviousthere was nothing blocking a thief from stealing your phone while it was in useand unlockedresetting your icloud password via your email accountdisabling find my iphoneand finally wiping it to be resoldthankfullyapple appears to 
have quietly fixed this loophole by making it more difficult to reset an icloud passwordif you attempt it from your phoneeven via safariit will ask you for your unlock pini used to recommend taking an additional precaution to mitigate this riskbut its not really necessary anymorealthough it cant hurtif you follow these stepsthe icloud account settings will be completely unavailable until you disable “restrictions” with an extra pin that youll set belowgo to settings > general > restrictions > enable restrictions > [create a pinenter it twice] > accounts > dont allow changesnow that apple has added the necessary security checks to the icloud password reset processmy recommendations are simplerenable two-factor authentication for icloudand choose a hard to guess6-digitor longerpin on your iphonewith touch idyou rarely have to type it anywaykudos to apple for making this improvementi believe it will help keep people safer by discouraging iphone muggingsnowif youre a bit more paranoid and want to really limit what someone can access if they snatch your phone while youre in the middle of using itthere is one more thing you can doan engineer on my team at auth0 told me about guided accessan ios feature that lets you temporarily restrict the phone to a single appand can even limit the actions available to the user within an appwith guided access configuredsettings > general > accessibility > guided accessyou can press the home button three times to enable it at any timewithin any apponce its onyoull need the guided access pin or touch id to exit the apps that easyre reading the news on your iphone on the subwayyou can quickly enable guided accessand lock it down to that one app onlythen even if someone grabs your unlocked phonetheyll be stuck in the news app foreverno one wants to read the news for too long nowadayssurethe thief can always power it offbut then the phone will be really locked and essentially useless to themeven for resellingguided access is intended for 
situations like when parents let their kids play with their iphonesor ipadss actually quite powerfulyou could even use it if you think scotland yard might be after you…", "image" : "https://cdn.auth0.com/blog/could-your-iphone-get-stolen/logo.png", "date" : "December 09, 2016" } , { "title" : "Machine Learning for Everyone", "description" : "Learn the basics of predictive modeling behind one of the most-used machine learning models", "author_name" : "Pablo Casas", "author_avatar" : "https://s.gravatar.com/avatar/759facc84628c0cc0746d347f217218e?s=80", "author_url" : "https://twitter.com/datasciheroes", "tags" : "r", "url" : "/machine-learning-for-everyone/", "keyword" : "we all know that machine learning is about handling databut it also can be seen asthe art of finding order in data by browsing its inner informationsome background on predictive modelsthere are several types of predictive modelsthese models usually have several input columns and one target or outcome columnwhich is the variable to be predictedso basicallya model performs mapping between inputs and an outputfinding-mysteriouslysometimes-the relationships between the input variables in order to predict any other variableas you may noticeit has some commonalities with a human being who reads the environment =>processes the information =>and performs a certain actionso what is this post aboutits about becoming familiar with one of the most-used predictive modelsrandom forestofficial algorithm siteimplemented in rone of the most-used models due to its simplicity in tuning and robustness across many different types of dataif youve never done a predictive model before and you want tothis may be a good starting pointdont get lost in the forestthe basic idea behind it is to build hundreds or even thousands of simple and less-robust modelsaka decision treesin order to have a less-biased modelbut howeverytinybranch of these decision tree models will see just part of the whole data to produce their humble 
predictions. So the final decision produced by the random forest model is the result of voting by all the decision trees, just like democracy. And what is a decision tree? You're already familiar with decision tree outputs: they produce if-then rules, such as: if the user has more than five visits, he or she will probably use the app. Putting it all together: if a random forest has three trees (but normally 500-plus) and a new customer arrives, then the prediction of whether said customer will buy a certain product will be "yes" if two trees predict "yes". Having hundreds of opinions (decision trees) tends to produce a more accurate result on average (random forest). But don't panic: all of the above is encapsulated away from the data scientist. With this model, you will not be able to easily know how the model comes to assign a high or low probability to each input case; it acts more like a black box, similar to what is used for deep learning with neural networks, where every neuron contributes to the whole. The next post will contain an example (based on real data) of how random forest orders customers according to their likelihood of matching certain business conditions. Also, it will map around 20 variables into only two, so the result can be seen by the analyst.

What language is convenient for learning machine learning? Auth0 mainly uses R software to create predictive models as well as other data processes. For example: finding relationships between app features, which impacts the engineering area; finding anomalies or abnormal behavior, which leads to the development of anomaly detection features; improving web browsing of the docs, based on Markov chains (the likelihood of visiting page B while being on page C); and reducing the time needed to answer support tickets using deep learning (not with R, but with Keras). If you want to develop your own data science projects, you could start with R. It has an enormous community from which you can learn (and teach). It's not always just a matter of complex algorithms, but also of having support when things don't go as expected, and this occurs often when you're
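The voting scheme described above (each tree casts a vote, and the forest returns the majority) can be sketched in a few lines. This is a toy illustration of the aggregation step only, with made-up stand-in "trees"; it says nothing about how real trees are trained:

```javascript
// Toy illustration of random-forest aggregation: each "tree" is just a
// function mapping an input to a class label, and the forest predicts
// by majority vote over all trees.
function majorityVote(trees, input) {
  const counts = {};
  for (const tree of trees) {
    const label = tree(input);
    counts[label] = (counts[label] || 0) + 1;
  }
  // Return the label with the most votes.
  return Object.keys(counts).reduce((a, b) => (counts[a] >= counts[b] ? a : b));
}

// Three stand-in "trees" voting on whether a customer will buy.
const trees = [
  (c) => (c.visits > 5 ? 'yes' : 'no'),
  (c) => (c.pagesViewed > 10 ? 'yes' : 'no'),
  (c) => (c.isReturning ? 'yes' : 'no')
];

// Two of the three trees vote 'yes', so the "forest" predicts 'yes'.
console.log(majorityVote(trees, { visits: 7, pagesViewed: 3, isReturning: true }));
```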
doing new things. Finally, some numbers about community support: R (like Python with pandas and NumPy) has lots of packages, libraries, free books, and free courses. Check these metrics: more than 160,000 questions on stackoverflow.com and another ~15,000 on stats.stackexchange.com are tagged with R.", "image" : "https://cdn.auth0.com/blog/machine-learning-for-everyone/logo.png", "date" : "December 06, 2016" } , { "title" : "How SAML Authentication Works", "description" : "Learn the nitty-gritty of SAML Authentication", "author_name" : "Prosper Otemuyiwa", "author_avatar" : "https://en.gravatar.com/avatar/1097492785caf9ffeebffeb624202d8f?s=200", "author_url" : "http://twitter.com/unicodeveloper?lang=en", "tags" : "saml", "url" : "/how-saml-authentication-works/", "keyword" : "TL;DR: User authentication is an integral part of most applications and systems, and the need for different forms and protocols of authentication has increased. One such protocol is SAML, and in this article you'll get to understand how it works.

What is SAML? Security Assertion Markup Language (SAML) is an XML-based framework for authentication and authorization between two entities: a service provider and an identity provider. The service provider agrees to trust the identity provider to authenticate users. In return, the identity provider generates an authentication assertion, which indicates that a user has been authenticated. SAML is a standard single sign-on (SSO) format: authentication information is exchanged through digitally signed XML documents. It's a complex single sign-on implementation that enables seamless authentication, mostly between businesses and enterprises. With SAML, you don't have to worry about typing in authentication credentials or remembering and resetting passwords.

Benefits of SAML authentication: without much ado, the benefits of SAML authentication include the following. Standardization: SAML is a standard format that allows seamless interoperability between systems, independent of implementation. It takes away the common problems associated
with vendor- and platform-specific architecture and implementation. Improved user experience: users can access multiple service providers by signing in just once, without additional authentication, allowing for a faster and better experience at each service provider. This eliminates password issues such as reset and recovery. Increased security: security is a key aspect of software development, and when it comes to enterprise applications it is extremely important. SAML provides a single point of authentication, which happens at a secure identity provider; SAML then transfers the identity to the service providers. This form of authentication ensures that credentials don't leave the firewall boundary. Loose coupling of directories: SAML doesn't require user information to be maintained and synchronized between directories. Reduced costs for service providers: you don't have to maintain account information across multiple services; the identity provider bears this burden.

How does SAML authentication really work? Let's take an in-depth look at the process flow of SAML authentication in an application. SAML single sign-on authentication typically involves a service provider and an identity provider, and the process flow usually involves the trust establishment and authentication flow stages. Consider this example: our identity provider is Auth0, and our service provider is an enterprise HR portal called Zagadat. (Note: the identity provider could be any identity management platform.) Now, a user is trying to gain access to Zagadat using SAML authentication. This is the process flow: 1. The user tries to log in to Zagadat from a browser. 2. Zagadat responds by generating a SAML request (example of a SAML request). 3. The browser redirects the user to an SSO URL, Auth0. 4. Auth0 parses the SAML request and authenticates the user (this could be via username and password or even two-factor authentication; if the user is already authenticated on Auth0, this step will be skipped), then generates a SAML response (example of a SAML response). 5. Auth0 returns the encoded SAML response to the browser. 6. The
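As a brief aside on the "encoded SAML response" mentioned in the flow above: SAML messages are XML documents, and the response is typically transported base64-encoded. A minimal sketch of the encode/decode round trip in Node.js follows; the sample XML is our own, heavily trimmed invention for illustration:

```javascript
// SAML responses travel base64-encoded; decoding one is plain Buffer work.
// The XML below is a heavily trimmed, made-up sample for illustration only.
const sampleXml =
  '<samlp:Response ID="_8e8dc5f6" InResponseTo="_abc123">' +
  '<saml:Issuer>urn:example:idp</saml:Issuer></samlp:Response>';

// What the identity provider would send to the browser:
const encoded = Buffer.from(sampleXml, 'utf8').toString('base64');

// What the service provider does before verifying the signature:
const decoded = Buffer.from(encoded, 'base64').toString('utf8');

console.log(decoded === sampleXml); // the round trip is lossless
```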
browser sends the SAML response to Zagadat for verification. 7. If the verification is successful, the user is logged in to Zagadat and granted access to the various resources (see the process flow diagram). Note the attributes that are highlighted in the SAML request and response. Here's a little glossary of these parameters. ID: a newly generated number for identification. IssueInstant: a timestamp indicating the time the message was generated. AssertionConsumerServiceURL: the SAML URL interface of the service provider, where the identity provider sends the authentication token. Issuer: the name (identity) of the service provider. InResponseTo: the ID of the SAML request that this response belongs to. Recipient: the name (identity) of the service provider.

Aside: SAML authentication with Auth0. With Auth0, SAML authentication is dead simple to implement. We can easily configure our applications to use Auth0 Lock for SAML authentication. In the example below, we will use one Auth0 account (account 1) as a service provider and authenticate users against a second Auth0 account (account 2), which will serve as our identity provider. Follow the steps below.

1. Establish two Auth0 accounts. If you do not already have two Auth0 accounts, you will need to create them; if you do, you can skip to step 2. In the Auth0 dashboard, in the upper right corner, click on the name of your account and, in the popup menu which appears, select "New Account". In the window which appears, enter a name for your second account in the "Your Auth0 domain" field and press the "Save" button. You can switch back and forth between the accounts by going to the upper right corner of the dashboard, clicking on the name of the current account, and using the popup menu which appears to switch between your accounts.

2. Set up the Auth0 IDP (account 2). In this section you will configure one Auth0 account (account 2) to serve as an identity provider. You will do this by registering a client, but in this case the "client" you register is really a representation of account 1, the SAML service provider. Log into account 2. In the Auth0 dashboard, click on the "Clients" link
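To tie the glossary above together, here is a sketch of what a minimal SAML authentication request might look like when assembled from those parameters. The XML shape is deliberately simplified and the helper function is our own invention; real requests carry more attributes and proper namespaces:

```javascript
// Illustrative only: assemble a simplified SAML AuthnRequest from the
// parameters described in the glossary (ID, IssueInstant,
// AssertionConsumerServiceURL, Issuer). Real requests are richer.
function buildAuthnRequest(params) {
  return '<samlp:AuthnRequest' +
    ' ID="' + params.id + '"' +
    ' IssueInstant="' + params.issueInstant + '"' +
    ' AssertionConsumerServiceURL="' + params.acsUrl + '">' +
    '<saml:Issuer>' + params.issuer + '</saml:Issuer>' +
    '</samlp:AuthnRequest>';
}

const request = buildAuthnRequest({
  id: '_' + Date.now().toString(16),              // newly generated identifier
  issueInstant: new Date().toISOString(),         // time of generation
  acsUrl: 'https://zagadat.example.com/saml/acs', // where the IdP sends the assertion
  issuer: 'urn:zagadat:sp'                        // name of the service provider
});

console.log(request.includes('AssertionConsumerServiceURL')); // true
```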
at left. Click on the red "+ Create Client" button on the right. In the name field, enter a name like "my-auth0-idp". Press the blue "Save" button. Click on the "Settings" tab. Scroll down and click on the "Show Advanced Settings" link. In the expanded window, scroll down to the "Certificates" section, click on the "Download Certificate" link, and select PEM from the dropdown to download a PEM-formatted certificate. The certificate will be downloaded to a file called YOUR_TENANT.pem. Save this file, as you will need to upload it when configuring the other Auth0 account. Click on the "Endpoints" tab and go to the "SAML" section. Copy the entire contents of the "SAML Protocol URL" field and save it; in the next step you will need to paste it into the other Auth0 account. Next, create a user to use in testing the SAML SSO sequence. In the Auth0 dashboard, click "+ Create Your First User". In the email field, enter an email for your test user. The domain name for the email should match what you enter in section 3 below. For example, if your user is johndoe@abc-example.com, you would enter that here and then enter "abc-example.com" in step 3 below for the email domain. Enter a password for the user. For the connection, leave it at the default value (Username-Password-Authentication).

3. Set up the Auth0 service provider (account 1). In this section you will configure another Auth0 account (account 1) so it knows how to communicate with the second Auth0 account (account 2) for single sign-on via the SAML protocol. Log out of account 2 and log into account 1. In the Auth0 dashboard, click on "Connections" and, in the list of options below it, "Enterprise". In the middle of the screen, click on "SAMLP Identity Provider", then click on the blue "Create New Connection" button. In the "Create SAMLP Identity Provider" connection window, enter the following information into the "Configuration" tab. Connection Name: you can enter any name, such as "saml-auth0-idp". Email domains: in this example we will use the Lock widget, so in the email domains field enter the email domain name for the users that will log in via this connection; if your users have an email domain of abc-example.com (as above), you would enter that into this field. You can enter multiple email domains if needed. Make sure the test user you created in section 2 has an email address whose domain matches what you enter here. Sign In URL: enter the "SAML Protocol URL" value that you copied in section 2 above (from the account 2 dashboard: Apps/APIs link, Settings tab, Advanced Settings, Endpoints section, SAML tab). Sign Out URL: enter the same URL as for the Sign In URL above. X509 Signing Certificate: click the "Upload Certificate" button and select the .pem file you downloaded from account 2 in section 2 above. You can ignore the rest of the fields for now and press the "Save" button at the bottom. After pressing "Save", a window will appear with a red "Continue" button (you might have to scroll up to see it). In the window that appears after clicking "Continue", metadata about this SAML provider (the service provider) is displayed. You will need to collect two pieces of information about this Auth0 account (the service provider) that you will then paste into the other Auth0 account you set up (the identity provider). First, look for the second bullet in the list of information that tells you the Entity ID. It will be of the form urn:auth0:YOUR_TENANT:YOUR_CONNECTION_NAME. Copy and save this entire Entity ID field, from "urn" all the way to the end of the connection name. In that same window, near the bottom, there is a line that says "You can access the metadata for your connection in Auth0 here:". Copy the URL below that line into your browser address bar. In general, you can access the metadata for a SAML connection in Auth0 here: https://YOUR_AUTH0_DOMAIN/samlp/metadata?connection=YOUR_CONNECTION_NAME. Once you go to that metadata URL, it will display the metadata for the Auth0 account 1 (service provider) side of the federation. It will look something like the following, with your account name in place of the "xxxxx". You need to locate the row that starts with AssertionConsumerService and copy the value of its Location field. It will be a URL of the form https://YOUR_AUTH0_DOMAIN.auth0.com/login/callback. Copy and save this URL; this is the URL on account 1 that will receive the SAML assertion from the IDP. In the next section you will give this URL to the IDP so it knows where to send the SAML assertion.

4. Add your service
provider metadata to the identity provider. In this section you will go back and add some information about the service provider (account 1) to the identity provider (account 2), so the identity provider Auth0 account knows how to receive and respond to SAML-based authentication requests from the service provider Auth0 account. Log out of account 1 and log back into account 2. For account 2, click on "Clients", find the row for the client you created earlier, and click on the "Addons" icon to the right of the client name (the angle bracket and slash icon). Locate the box with the "SAML2 Web App" label and click on the circle toggle to turn it green. A configuration window will pop up for the addon. Make sure you are in the "Settings" tab. In the "Application Callback URL" field, paste in the Assertion Consumer Service URL that you copied and saved in section 3 above (the last step). In the settings field below, go to line 2, which has the "audience" attribute. First remove the "//" at the beginning of the line to uncomment it, then replace the original value with the Entity ID value you copied and saved in step 3 above. The new line 2 should look something like "audience": "urn:auth0:YOUR_TENANT:YOUR_CONNECTION_NAME". Press the "Save" button at the bottom of the screen.

5. Test the identity provider. In the same screen, click the red "Debug" button. That will trigger a login screen from account 2, the identity provider. Log in with the credentials for account 2. If your configuration is correct, you will see a screen titled "It works!". This screen will show you the encoded and decoded SAML response that would be sent by the identity provider. Check the decoded SAML response and locate (about half-way down) the <Audience> tag, and make sure it matches the Entity ID you entered in the previous screen (obtained during step 3). Close this window when done; the close control is at the bottom of the screen.

6. Register a simple HTML application with which to test the end-to-end connection. In this section you will register an application in Auth0 that will use the SAML connection you set up in the above steps. Make sure you are logged into the account 1 Auth0 dashboard. Click "+ Create App". In the name field, enter a name like "my-html-saml-app". For "Allowed Callback URLs", enter http://jwt.io. The list of Allowed Callback URLs is a list of URLs to which users will be redirected after authentication. The URL entered here must match the callback URL in the HTML code created in the next step. Normally you would enter a URL for your application, but to keep this example simple, users will simply be sent to the Auth0 JWT online tool, which will provide some information about the JSON Web Token returned at the end of the authentication sequence. Press the "Save Changes" button at the bottom of the screen. In the same screen, in the row that says "Quick Start", "Settings", etc., scroll down to the section near the bottom, find the row for the SAML connection you created above, and click on the on/off toggle at right so that it is green, for "on". That enables the SAML connection for this application.

7. Test the connection from service provider to identity provider. In this section you will test to make sure the SAML configuration between Auth0 account 1 (service provider) and Auth0 account 2 (identity provider) is working. Navigate to Connections -> Enterprise -> SAMLP Identity Provider. Click on the triangular "Try" button for the SAML connection you created earlier. This button is to the right of the name of the connection (you can hover your mouse over the button to make the text label appear). You will first see a Lock login widget appear, triggered by the service provider. Enter the username of the test account you created earlier. You will then be redirected to the Lock login widget of the identity provider; log in with the credentials for the test user you created. If the SAML configuration works, your browser will be redirected back to an Auth0 page that says "It works!". This page will display the contents of the SAML authentication assertion sent by the Auth0 identity provider to the Auth0 service provider. This means the SAML connection from the Auth0 service provider to the Auth0 identity provider is working. (Note: the "Try" button only works for users logged in to the Auth0 dashboard; you cannot send it to an anonymous user to have them try it.)

8. Create the HTML page for a test application. In this section you will create a very
simple HTML page that invokes the Auth0 Lock widget, which will trigger the SAML login sequence. This will enable an end-to-end test of the SAML SSO. Create an HTML page and insert the following:

```html
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html>
<body>
<p>Click on the button to log in</p>
<script src="http://cdn.auth0.com/js/lock/10.2/lock.min.js"></script>
<script type="text/javascript">
  var lock = new Auth0Lock('YOUR_CLIENT_ID', 'YOUR_AUTH0_DOMAIN', {
    redirectUrl: 'http://jwt.io',
    responseType: 'token',
    auth: {
      params: {scope: 'openid'}
    }
  });

  function signin() {
    lock.show();
  }
</script>
<button onclick="signin()">Login</button>
</body>
</html>
```

Make sure you replace YOUR_CLIENT_ID with the actual value of the app you registered in step 7 above. The Client ID for your client can be found in the Auth0 dashboard for account 1 by going to the "Clients" link and clicking on the gear icon to the right of your client's name. Save this file in a place where you can access it via a browser. For this example, we'll call it hello-saml.html.

9. Test your sample application. In this step you will test your sample HTML application, which uses the Auth0 SAML connection you set up in account 1 to perform SSO via SAML against account 2 (serving as the SAML identity provider). Open the HTML file created above with a browser. You should first see a white page with a login button on it. Click on the login button. The Auth0 Lock widget should appear with one login option. (If you have other connections turned on for your client, your Auth0 Lock widget may look slightly different.) If you are prompted to enter an email address, make sure the email address you enter has the same domain name as the domain you entered in the "Settings" tab for the client in the account 1 Auth0 dashboard (Apps/APIs -> Settings). After entering your email address, the blue button on the Lock widget may have a new label. Click on the button (which may be labeled "ACCESS", or with the email domain of the email address you are logging in with) to initiate the SAML SSO sequence with the Auth0 identity provider. You will be redirected to the identity provider to log in. Note that whether you are prompted for credentials at this point depends on whether you still have an active session at the identity provider. From the "Try" test you did earlier, you may still have an active session at the identity provider. If this is the case, you will not be prompted to log in again and will simply be redirected to the callback URL specified in the HTML file. (Remember that this callback URL must also be in the Allowed Callback URLs in the client's "Settings" tab in the Auth0 dashboard.) If sufficient time has passed, or if you delete your browser cookies before initiating the test, then you will be prompted to log in when redirected to the identity provider. Log in to the identity provider using the credentials for the test user you created in Auth0 account 2. Upon successful authentication, you will be redirected to the callback URL specified in the HTML file (jwt.io).

10. Troubleshooting. This section has a few ideas for things to check if your sample doesn't work. Note that if your application doesn't work the first time, you should clear your browser history (and ideally cookies) each time before you test again; otherwise, the browser may not pick up the latest version of your HTML page, or it may have stale cookies that impact execution. When troubleshooting SSO, it is often helpful to capture an HTTP trace of the interaction. There are many tools that will capture the HTTP traffic from your browser for analysis; search for "HTTP trace" to find some. Once you have an HTTP trace tool, capture the login sequence from start to finish and analyze the trace to see the sequence of GETs, to see how far in the expected sequence you get. You should see a redirect from your original site to the service provider, then to the identity provider, a POST of credentials if you had to log in, a redirect back to the callback URL of the service provider, and finally a redirect to the callback URL specified in your client. Be sure to check that cookies and JavaScript are enabled for your browser. Check to
make sure that the callback url specified in the html file is also listed in the allowed callback urls field in thetab of the client registered in the auth0 dashboardin dashboardclick on clients linkthen on theicon to the right of the clientthe http//samltoolio tool can decode a saml assertion and is a useful debugging toolauth0 also provides several optionshow to configure auth0 to serve as an identity provider in a saml federationhow to configure auth0 to serve as a service provider in a saml federationsaml configurations for sso integrations such as google appshosted graphitelitmoscisco webexsprout videofreshdesktableau serverdatadogegenciaworkday and pluralsighthow to configure auth0 to use other identity providers such as oktaoneloginpingfederate 7salesforcesiteminder and ssocircleconclusionwe have covered how saml authentication works and also went through some steps to implement it in an applicationyou want to implement saml authentication in your appsign up for auth0 and implement saml authentication seamlessly today", "image" : "https://cdn.auth0.com/blog/SAMLLogo.png", "date" : "December 05, 2016" } , { "title" : "Modern Authentication for Your Clients Made Easy", "description" : "Legacy username and password authentication is not enough. 
Learn about modern authentication and how Auth0 can help you implement it for your clients.", "author_name" : "Ado Kukic", "author_avatar" : "https://s.gravatar.com/avatar/99c4080f412ccf46b9b564db7f482907?s=200", "author_url" : "https://twitter.com/kukicado", "tags" : "security", "url" : "/modern-authentication-for-your-clients-made-easy/", "keyword" : "TL;DR: Whether you're a small consultancy working with a few clients or have been around for some time, the authentication needs of your customers are likely similar. Legacy authentication via username and password is not enough: your customers are demanding multifactor authentication, single sign-on, and greater security. Building these features in-house is complex, but with Auth0 it doesn't have to be.

Being a consultant or systems integrator means wearing many hats. To be successful you must be knowledgeable about various technologies, methodologies, and best practices, as well as know how to put them all together to meet your clients' goals. The technological landscape is constantly changing, and keeping up can be a challenge. When it comes to authentication and security, usernames and passwords are simply not enough. Modern authentication focuses on managing user identity; top-tier security and availability; multiple authentication strategies; multifactor authentication; single sign-on; passwordless authentication; and more. Keeping up with modern authentication and security best practices would be challenging on its own, but implementing it alone would require a herculean level of effort. At Auth0, we can provide your consulting organization with the tools and support to implement modern authentication for your clients. Building modern authentication is difficult.

Auth0 for systems integrators: whether you're a small consultancy or an established integrator, the authentication requirements from your customers are likely similar. Auth0 can provide a comprehensive authentication platform that your clients will love, and your organization will benefit from being able to provide modern authentication in record time. Let's take a look at some of the features.

Modern authentication: authentication starts at the login screen. The Lock widget provides a modern login widget that supports traditional username and password, social, and enterprise authentication. This cross-platform widget can be implemented in as little as ten lines of code, but supports various branding and configuration options to truly make it your own. If you prefer to implement your own unique UI instead, our RESTful API provides the same functionality so you can have full control.

Enterprise federation and single sign-on: enterprises require governance over their users and will demand single sign-on. Implementing and maintaining integrations with the various enterprise federation providers would be a very challenging task. Auth0 has built integrations with leading platforms like Active Directory, Ping, ADFS, LDAP, SAML, and more, so all you have to do is provide the configuration details and we'll take care of the rest.

Social connections: allowing users to log in with their existing social accounts can increase conversion rates and improve the user experience. Auth0 supports over thirty social connection providers, including Facebook, LinkedIn, Twitter, and Google, and can additionally be integrated with any provider that supports OAuth 2.0.

Traditional authentication with enhanced security features: if username and password authentication is a requirement, Auth0 has you covered. Traditional username and password (or email and password) authentication is provided with enhanced security features. Multifactor authentication greatly enhances the security of this type of authentication and can be added with the flip of a switch. Anomaly detection, which prevents brute-force attacks, as well as breached password detection, go a step further to provide greater security for your users.

Passwordless authentication: passwordless authentication allows your users to log in without having to enter or remember a password. Instead, when the user wants to log in, they provide their email address or phone number and 
receive a one-time passcode that they use to authenticate instead. Auth0 provides various options for passwordless authentication, including TouchID.

Auth0 partners program: Auth0 solves authentication, identity, and security challenges for organizations by providing a complete platform for managing modern identity. Our goal is to make identity simple and easy, so you can spend less time worrying about authentication and focus on your clients and their needs. Contact us today to learn more about becoming an Auth0 technology partner.", "image" : "https://cdn.auth0.com/blog/systems-integrators-post/logo.png", "date" : "December 02, 2016" } , { "title" : "What are the different ways to implement Multifactor Authentication?", "description" : "Learn how the different types of Multifactor Authentication work!", "author_name" : "Prosper Otemuyiwa", "author_avatar" : "https://en.gravatar.com/avatar/1097492785caf9ffeebffeb624202d8f?s=200", "author_url" : "https://twitter.com/unicodeveloper", "tags" : "Authentication", "url" : "/different-ways-to-implement-multifactor/", "keyword" : "TL;DR: Multifactor authentication provides an extra layer of security by ensuring users provide more than one piece of information for identification. It typically requires a combination of something the user knows (such as PINs, passwords, or secret questions) and something the user has (such as cards, hardware tokens, or a phone). Worth noting is that two-factor authentication is the most-used type of multifactor authentication (MFA). In this article we will go over why we should implement multifactor authentication and the different ways to implement it.

Why should we implement multifactor authentication? There have been several cases of stolen and hacked passwords, and attacks on systems protected by just a simple username and password combination have been on the rise. In this situation, implementing multifactor authentication will prevent hackers from gaining access to your accounts even if your password is stolen. The extra layer of protection that MFA offers ensures that your account is more secure.

What are the different ways to implement multifactor? I'll highlight various ways to implement multifactor below; an in-depth analysis of each process follows later in this post. We'll cover multifactor via: time-based one-time password (TOTP); short message service (SMS); electronic mail (email); and push notifications.

How time-based one-time password works: TOTP involves the generation of a one-time password from a shared secret key and the current timestamp using a specific kind of cryptographic function. These cryptographic functions can vary across the board; a simple example is SHA-256. TOTP is defined in RFC 6238. The process flow for a typical multifactor application using TOTP involves the enrollment and login processes. The enrollment process is as follows: a user logs into a website/app with a username and password; if the credentials are valid, the next stage involves enabling two-factor authentication for the user; a shared key is requested (in the form of text or a QR code); the key is stored by an app that implements TOTP, such as Google Authenticator or Auth0 Guardian; two-factor authentication is now enabled. The login process is as follows: the user is directed to another form where he/she is required to enter a one-time code generated by Google Authenticator or Auth0 Guardian; the server verifies that the code is valid and finally authenticates the user. An alternative implementation is the use of RSA keys. RSA authentication is based on two factors: a password/PIN and an authenticator. The authenticator might be a hardware or software token assigned to a user. During login, after entering the password/PIN, the user clicks on the token and an authentication code is generated at fixed intervals (usually about 60 seconds) using a built-in clock and the device's factory-encoded random key. The key is different for each token and is loaded into the corresponding RSA Authentication Manager. Note: the generated codes are time-based, so the client and the server need to synchronize their clocks for this to work efficiently.

How short message service (SMS) works: the process for a typical multifactor application using SMS also involves the enrollment and login stages. Enrollment: a user logs into a website/application with a username and password; the user is asked to enter a valid phone number (probably on the settings page); a unique one-time code is generated on the server and then sent to the phone number; the user enters the code into the app, and multifactor is enabled. Login: a unique one-time code is generated on the server and then sent to the registered user's phone number; the user enters the code into the app; if it's valid, the user is authenticated and a session is initiated.

How electronic mail (email) works: the process for a typical multifactor application using email is as follows: a unique one-time code is generated on the server and sent via email to the user; the user retrieves the code from the email and enters it into the app.

How push notifications work: typically, push notifications work with applications such as Auth0 Guardian. A push notification is sent to the Guardian app on your mobile device; this notification is a login request, and it includes information such as the application name, the OS and browser of the request, and the location and date of the request. The user accepts the request, and automatically the user becomes logged in.

Aside: different ways to implement multifactor with Auth0. Implementing multifactor with Auth0 is a breeze. The various ways to implement multifactor with Auth0 are as follows: push notifications with Auth0 Guardian (Guardian offers a frictionless approach to implementing MFA for your apps and provides a full MFA experience without requiring integration with third-party utilities; you can find out how to implement push notifications with Auth0 Guardian); SMS: Auth0 supports 
sending an SMS with a one-time password code to be used for another step of verification; TOTP with Google Authenticator and Duo (learn how to enable Google Authenticator and Duo Security); custom providers such as Yubikey; and contextual MFA with scripted rules. Sign up for a free account today and enjoy fast, seamless, hassle-free multifactor authentication in your apps.

Conclusion: we have covered the different ways to implement multifactor authentication in an application and how they work. Sign up for Auth0 and add that extra layer of security to your apps today in a breeze.", "image" : "https://cdn.auth0.com/blog/MFALogo.png", "date" : "November 30, 2016" } , { "title" : "What the New NIST Guidelines Mean for Authentication", "description" : "Learn about NIST's Digital Authentication Guideline and what it means for authentication security.", "author_name" : "Kim Maida", "author_avatar" : "https://en.gravatar.com/userimage/20807150/4c9e5bd34750ec1dcedd71cb40b4a9ba.png", "author_url" : "https://twitter.com/KimMaida", "tags" : "security", "url" : "/what-the-new-nist-guidelines-mean-for-authentication/", "keyword" : "TL;DR: The US National Institute of Standards and Technology (NIST) is creating new policies for federal agencies implementing authentication. Learn about NIST Special Publication 800-63-3, the Digital Authentication Guideline, and what it means for authentication security.

NIST Digital Authentication Guideline: the draft, called Special Publication 800-63-3, is available on the NIST website as well as on NIST's GitHub. The suite of documents includes the following: 800-63-3 (Digital Authentication Guideline, overview); 800-63A (Enrollment & Identity Proofing); 800-63B (Authentication & Lifecycle Management); and 800-63C (Federation & Assertions). The policies are intended for federal agency applications, but serve as a standard for many others as well.

NIST improved password requirements: the Digital Authentication Guideline strives for improved password requirements. One of the guiding principles is a better user experience, shifting the burden to the verifier whenever possible. To support the creation of passwords that users will remember while implementing excellent security, several guidelines are important: length (8-character minimum, with a maximum allowed length of at least 64 characters); compare new passwords to a dictionary and don't allow common, easily guessed passwords (such as "password", "abc123", etc.); allow all printing characters plus spaces; offer an option to show the password rather than dots or asterisks (this helps typing accuracy); don't enforce composition rules (i.e., no "passwords must include uppercase and lowercase letters and a number"; such rules provide a poor user experience); don't use password hints (they weaken authentication); don't expire passwords arbitrarily (regular expiration encourages users to choose easy-to-guess, less secure passwords); and don't use knowledge-based authentication (KBA).

NIST guidelines for password storage: NIST also supplies guidelines for the verifier's encryption and storage of passwords. These policies ensure that passwords are stored securely: passwords shall be hashed with a 32-bit (or greater) random salt; use an approved key derivation function, e.g. PBKDF2 using SHA-1, SHA-2, or SHA-3 with at least 10,000 iterations; passwords should additionally use a keyed HMAC hash, with the key stored separately.

NIST on multifactor authentication: NIST recommends utilizing out-of-band (OOB) authentication to provide two-factor authentication (2FA). The guidelines also state that SMS is deprecated for OOB authentication: SMS can be compromised by a variety of threats such as smartphone malware, SS7 attacks, forwarding, change of phone number, and more. Examples of non-SMS OOB authenticators include Auth0 Guardian, Duo Mobile, and Google Authenticator. NIST states that biometrics must be used with another authentication factor for multifactor authentication.

Conclusion: overall, the new guidelines put the user experience at the forefront while also establishing more robust storage and authentication methods. Although the NIST Digital Authentication Guideline governs federal sites, its tenets are good standards for any app or site with authentication. The guideline is currently in draft; when the policies are final, federal agencies as well as many other companies and vendors will make strides to comply with the new guidelines for improved authentication security and user experience. To learn more, check out the NIST draft 800-63-3 itself and Jim Fenton's "Toward Better Password Requirements" presentation.", "image" : "https://cdn.auth0.com/blog/nist-auth-guideline/NISTimage.png", "date" : "November 29, 2016" } , { "title" : "How To Stay Safe While Shopping Online For Cyber Monday", "description" : "Learn these six tips that will help you shop online safely on Cyber Monday", "author_name" : "Prosper Otemuyiwa", "author_avatar" : "https://en.gravatar.com/avatar/1097492785caf9ffeebffeb624202d8f?s=200", "author_url" : "http://twitter.com/unicodeveloper?lang=en", "tags" : "shopping", "url" : "/how-to-stay-safe-while-shopping-online-for-cyber-monday/", "keyword" : "Technology has made it very easy for people to purchase goods and services from the comfort of their homes. Cyber Monday is approaching, and a lot of people will be online watching the prices of their favorite items fall, ready to purchase them when the price drops far enough. Billions of dollars will be spent online during this popular end-of-the-year sales event. It's well known that a lot of online transactions happen during the months of October, November, and December. Unfortunately, a lot of fraudulent online activity happens during this time as well. Cyber criminals are most active during holidays, and a lot of online shoppers are negligent about cyber security threats; these kinds of users make easy targets for hackers and cyber criminals. How do you stay safe while shopping online? I'll expand on six tips that can help you shop safely online on Cyber Monday.

1. Ensure that the site is secure by looking for HTTPS in the URL. Once you visit a website, ensure that the site has Secure Sockets Layer (SSL) encryption installed; the 
website will start with https:// and a locked padlock will also be present in the address bar, near the URL. Don't attempt to buy anything with your credit/debit card on a website that has no SSL installed; make sure you shop on secure sites.

2. Avoid using weak passwords. Hackers maintain lists of commonly used passwords that are deployed via bots to try on various websites at a time. Using a weak password increases the risk that your credentials appear in those lists. If you currently have a weak password on the sites you shop with regularly, it's time to change to a very strong password. Here are some tips to help you create a secure password: use a long password, including at least 10 characters with a combination of numbers, symbols, and upper- and lowercase letters; avoid using common information such as your name, social security number, nickname, and so on.

3. Look out for fake and forged sites. During this shopping period, cyber criminals go to great lengths to replicate popular websites. Users can then get tricked into providing sensitive information such as usernames, passwords, social security numbers, and credit card details on fake versions of popular sites, thinking that they are logged into the real site. Here are some tips to ensure that you don't log into a forged site: look out for the encryption symbol (padlock) near the URL; verify that the site is secure by ensuring that the URL starts with https:// instead of http://; look out for typos in the site name and URL; look out for fake ads and emails; use anti-phishing software such as Kaspersky or Avast.

4. Try as much as possible to shop with trusted brands. Various shopping websites have earned a strong reputation over time; I advise that you shop on those websites that are globally recognized as trusted brands. These brands have experienced Cyber Monday events many times and have improved their tech to provide very robust security measures to handle the numerous online shoppers who will log into their platforms. Note: a lot of cyber criminals try to play on this mentality by creating fake versions of these popular websites; as I highlighted earlier in this post, look out for forged sites. [image: example of a forged site; source: thebrandthattimeforgot.files.wordpress.com]

5. Look out for malware and ad scams. During this season, social network scams and malware are on the increase; lots of fake deals exist on the internet. These fake deals, when clicked, direct you to fake sites and trick you into downloading, in some cases, "antivirus" software or software that supposedly lets you claim hot deals. In most cases, this software is malware that can steal information from your computer when installed. Here are some tips to ensure that you are not a victim of these scams: ensure that you have an up-to-date antivirus program installed on your computer; ensure that your computer's operating system is up to date; don't open URLs that seem suspicious (e.g., http://adabouts.com). You might see various ads that say "claim this" or "claim that", like the one below. [image: scam ads]

6. Avoid deals that are too good to be true. Humans, by nature, are said to be driven by greed, so cyber attackers prey on users with this mindset by tricking them with incredible deals. During this season, various deals appear on social media that are just too good to be true. There is a popular "Twelve Scams of Christmas" list compiled by McAfee, which contains well-known examples of too-good-to-be-true deals. Try as much as possible to avoid deals like this.

Aside: stay secure with Auth0. As a developer, you can use Auth0 Lock to authenticate your users. The Lock widget uses HTTPS to ensure that users' information is transmitted securely. You can also use Auth0's breached password detection feature to ensure your users are protected from compromised credentials. Furthermore, you can define password policies to customize the level of complexity of the passwords a user enters during sign-up. Auth0 offers five levels of security matching the OWASP password recommendations: none (default; the password must exist and be at least one character long); low (the password must be at least six characters long); fair (the password must be at least eight characters long and must contain a lowercase letter, an uppercase letter, and a number); good (the password must be at least eight characters long and must contain at least three of the following four character types: a lowercase letter, an uppercase letter, a number, or a special character, e.g. @#$%^&*); excellent (the password must be at least 10 characters long, it must contain no more than two identical characters in a row ("aaa" is not allowed), and it must contain at least three of the following four types of characters: a lowercase letter, an uppercase letter, a number, and a special character).

Conclusion: we have covered various tips on how to stay safe while shopping online on Cyber Monday. One more tip: it's safer to use credit cards to shop online. With credit cards, if there are suspicious unauthorized transactions on your card, you can contact your bank to reject the transactions and have your money refunded. Happy safe shopping!", "image" : "https://cdn.auth0.com/blog/cybermondaylogo.png", "date" : "November 25, 2016" } , { "title" : "US Navy Data Leaked: 10 Tips to Protect Sensitive Data from Theft", "description" : "The recent leak of US Navy sailors' personal information puts in the spotlight the difficulties of keeping sensitive data secure, here's what you can do about it", "author_name" : "Sebastián Peyrott", "author_avatar" : "https://en.gravatar.com/userimage/92476393/001c9ddc5ceb9829b6aaf24f5d28502a.png?size=200", "author_url" : "https://twitter.com/speyrott?lang=en", "tags" : "security", "url" : "/navy-data-leaked/", "keyword" : "The US Navy was recently notified by a contractor of a major leak of sailors' personal information. Although the leak is currently under investigation, the perpetrators purportedly used a stolen laptop with either compromised credentials or downloaded data. In this post we will study the different ways in which sensitive data can be protected in case of device theft. Read on. Property theft must be accounted for when sensitive data is at stake.

Introduction: in this post we will go over a very common scenario: theft of portable devices 
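The Auth0 password-policy levels listed above can be approximated with simple checks. This is a sketch of the rules exactly as described ("fair", "good", "excellent"), not Auth0's actual validator:

```javascript
// Character classes used by the "good" and "excellent" levels.
const CLASSES = [/[a-z]/, /[A-Z]/, /[0-9]/, /[^a-zA-Z0-9]/];

function classCount(pw) {
  return CLASSES.filter((re) => re.test(pw)).length;
}

// "fair": >= 8 chars with a lowercase letter, uppercase letter, and number.
function meetsFair(pw) {
  return pw.length >= 8 && /[a-z]/.test(pw) && /[A-Z]/.test(pw) && /[0-9]/.test(pw);
}

// "good": >= 8 chars and at least three of the four character classes.
function meetsGood(pw) {
  return pw.length >= 8 && classCount(pw) >= 3;
}

// "excellent": >= 10 chars, three of four classes, and no character
// repeated more than twice in a row ("aaa" is rejected).
function meetsExcellent(pw) {
  return pw.length >= 10 && classCount(pw) >= 3 && !/(.)\1\1/.test(pw);
}
```

A sign-up form could run the check matching the tenant's configured policy level and reject the password client-side before it ever reaches the server.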
carrying sensitive information. When this happens, proper policies can make all the difference between minor and major losses. In this case, it was the personal information of more than 130,000 sailors, including their social security numbers. The immediate impact of the leak may not be major, but the long-term ramifications are hard to guess at this point. But what if corporate banking account credentials were leaked? Or insider information that could crash stock prices? Sometimes it is something as small as a laptop or smartphone that can change the course of a major company or a person's life. So what can be done about it?

The usual scenarios: at this point, we know next to nothing about what the perpetrators did to access the sailors' information, so let's consider three different hypothetical scenarios. All of them are based on the premise of a stolen device with sensitive information stored on it.

Scenario 1: cached credentials. Cached credentials are probably one of the most common cases. Most users rely on password managers or active sessions to access most of their services. Think of your email: once you have logged in, it rarely asks again for a password. Certain providers, such as Google, make sure to reauthenticate after a certain number of days. This is a good policy; however, it is not enough. Once you have accessed a compromised email account, it is very easy to access other accounts through password recovery mechanisms. This is a case where two-factor authentication makes a big difference: even if a malicious user can reset your password, he or she cannot access an account protected with two-factor authentication without controlling the second factor. Most services do not reauthenticate. In the case of email, things can be even worse: email is not only the gateway to password recovery mechanisms, but also to impersonation and social engineering, which we will discuss below.

Scenario 2: cached data. Another possible scenario deals with actual cached data on the stolen device. Although unlikely for big amounts of data (storage is cheap), it is not preposterous to think a developer working on a critical piece of infrastructure would keep some data local to speed up development. This is a case where encryption and policies for sensitive data storage help tremendously. But even then, encryption is only as strong as its authentication mechanism: a weak password defeats any encryption.

Scenario 3: social engineering. This scenario is often overlooked. Once you have enough tools to impersonate a user, you can start to pull strings until you get the level of access required for your malicious purposes. The owner of the laptop may not have had access to the sailors' data, but what if he had an active email session on his computer? The malicious user could have used that email account to request temporary access to the protected resource (to run tests, for instance). This is another scenario where proper policies and development practices can make or break your chain of security.

Policies to protect your data: so what do all of these scenarios have in common? There are proper policies that can be used to mitigate the effects of device theft. Some of these policies are easy to enforce, while others require strict adherence from users. This is a key factor in security that is often overlooked: the weakest links in the chain are usually people, so the more you can automatically enforce, the better. Unfortunately, people do not like to feel constrained, so balance must be sought. What policies could we use to prevent theft of information in the above scenarios? Let's find out.

1. Enforce device encryption and strong passwords. Device encryption is cheap nowadays: corporate laptops and all modern smartphones support encryption out of the box. The level of encryption should be such that no access to a user's profile or account is possible without authentication. In other words, partial encryption is not enough. Encrypting the documents directory is good, but it is much better to encrypt the whole user account. Consider what would happen if you sent a key document through email and your local account password was reset: an active email session could be used to download the encrypted file you forgot to delete from your email. macOS provides FileVault, Windows provides BitLocker, iOS provides encryption by default when an unlock code is set, and Android has the option as well. There is just no excuse to leave any of these options off. Even better: encrypt your documents folder using any tried-and-true encryption tool even after setting up FileVault or BitLocker, and do not reuse credentials; sensitive information can never be too safe. For our laptop-theft scenario, encryption may have been disabled or enabled only for certain folders. If that is the case, it is simple to reset a user's account password with physical access to a system, and if the password is reset and no encryption is in place, any local data is readable.

2. Set authentication timeouts and triggers. Encryption may as well be disabled if reauthentication is not performed regularly. Think of it: you are at a coffee shop, logged in; you have encryption on and all is safe. You feel the urge to go to the bathroom, so you close the laptop and put it in your backpack. You ask your friend to look after your backpack while you are away. You come back only to realize your friend got distracted for a second and your backpack is missing. What if your account does not require reauthentication after closing the lid? You are now compromised. Proper triggers for this are: short timeouts (ten minutes is an eternity for sensitive data; consider a timeout as short as possible, and consider enforcing a personal policy of closing or locking the laptop whenever you are away); requiring reauthentication after the lid is closed or the browser is closed (the incognito or private browsing modes of all browsers enforce this: any session opened will be closed after closing the tab containing it; do not use the "lock after X minutes when the lid is closed" option, as this leaves a short window of opportunity for an attacker); and requiring reauthentication after any screen-off timeout. This applies to both laptops 
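The short-timeout policy above reduces to a simple check an application can run before serving anything sensitive. The limit here is illustrative (the text argues even ten minutes is too generous):

```javascript
// Treat a session as expired once too much idle time has passed,
// forcing reauthentication. All times are in milliseconds.
const IDLE_LIMIT_MS = 10 * 60 * 1000; // illustrative; tune this down

function sessionExpired(lastActivityMs, nowMs = Date.now()) {
  return nowMs - lastActivityMs > IDLE_LIMIT_MS;
}
```

On a lid-close or screen-off event, the same effect is achieved by simply zeroing the session's remaining lifetime instead of waiting for the idle clock.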
and phones: rather than "lock after X minutes when the screen is off", make the device lock itself whenever the screen goes blank.

3. Require the use of two-factor authentication. Two-factor authentication makes single-device theft much less of a threat. Even if the malicious user were to gain access past the screen lock, access to a protected resource would require another device to authenticate the user. Two-factor authentication can be defeated if it is not required for frequent access; sensitive information should always require authentication, including two-factor authentication. In our laptop scenario, if two-factor authentication were not enabled and access to an opened browser were available, password autocompletion could be used to access a protected resource. Two-factor authentication would require access to a device that was not stolen.

4. Require authentication for every stored-password entry procedure. Password managers are quite common today: the combination of strong password requirements and a plethora of services makes remembering passwords quite a feat. Password managers attempt to help by providing secure and convenient storage for passwords. The downside of having a central place for passwords is that if the master password gets out, access to many different services is possible. Therefore, master passwords must be strong and never reused. Furthermore, access to the password database must only be performed after immediate authentication; cached access to it can leave a short window of access for potential malicious users, or even allow a full password database dump. A password manager could have been available on the stolen computer; if the malicious user gained access to an unencrypted user session, using the password manager without authentication might have been possible.

5. Require authentication every time access to a sensitive resource is requested. Some resources are more sensitive than others. Although quick access whenever you want to send an email is convenient, keeping full access to a database of 130,000 users enabled is not. Establish access policies for each resource according to its sensitivity. In our laptop scenario, access to the user database should have been possible only after authentication and only for a short period of time, and credentials should not have been cached. You have probably seen this practice in the wild already: Gmail requires authentication when changing account settings, even if you have recently authenticated. Most sites also require authentication whenever changing the password. This prevents common away-from-keyboard or leaked-session attacks.

6. Teach users to follow a no-local-copy policy for sensitive information. This is a key policy for users and developers alike. Although it is hard to enforce from a device-policy point of view, users can be educated to only store sensitive data as long as necessary. A developer requesting access to a production database to test a corner case is acceptable; letting that developer make a full dump of the database for quick testing purposes is not. Storing sensitive data in client-side apps is not acceptable either. As usual, the pros and cons for each resource must be weighed appropriately. For each case, consider: what would happen if the data were compromised? Can I trust users to follow the policy? If not, can I automate it somehow? This is one of the likely scenarios for our laptop-theft case: if the contractor had stored a local copy of the database to perform tests or develop a feature, that would be a violation of this policy. See tip 10 for a way for developers to do their job without making this policy a pain to follow.

7. Have a remote or automated wipe procedure in place. After everything has gone wrong, there is one other possible way to keep data safe: wipe it. Most portable devices have some form of internet connectivity; exploit this (sometimes problematic) feature to your advantage and have a remote wipe procedure in place. This is already available in most smartphones, and in combination with tip 1 it provides strong guarantees: by keeping the data encrypted, a secure remote wipe can be performed almost instantaneously. In the old days, when keeping large amounts of data encrypted was not practical due to performance, securely wiping the data would take time; due to the physical nature of storage devices, multiple writes are required to make sure no data can be recovered, and doing this for all stored data takes time. Nowadays, encryption can be combined with remote wipe procedures: by securely wiping the encryption keys required to decrypt the data, recovering what is stored becomes impractical (unless the encryption can be broken). Automated wiping is also very useful. Some smartphones, for instance, have the option to perform a secure wipe after a certain number of wrong unlock attempts. This limits the chances an attacker gets to access secure data. Of course, remote wiping is only useful if the user reports the theft in time, or performs the remote wipe himself. In our laptop-theft scenario, the user may have reported the occurrence to the IT team, but the attacker may have had enough time to access critical data. Or, as many attackers do to increase their chances, he or she may have disabled WiFi or cellular connectivity to prevent the remote wipe command from being received; this is where automated wiping is essential. As long as the device is encrypted, a skillful attacker may still attempt to increase his or her chances of accessing data by imaging the storage device of the stolen device. For certain devices this requires advanced electronics knowledge and equipment; for others, it is as simple as taking a couple of screws out. Encryption and the no-local-copy policy for data are essential in combination with this policy to increase the chances of keeping data safe.

8. Use geofencing and geographical checks to report suspect accesses to sensitive data. This tip is of great use when combined with tip 7. Geofencing and geographical checks can quickly alert an IT team of suspicious activity from a compromised device. You may have experienced this yourself already: Google alerts users when 
simultaneous logins from distant places are active at the same time. Except for VPNs or tunnels, users usually operate their account from a single place at a specific time. When geographical checks fail, authentication can be requested, or other actions, such as remotely wiping the device or invalidating credentials, can be taken. In contrast with geographical checks, which usually compare the distance of simultaneous activity from the same set of credentials, geofencing checks the physical location of a device using WiFi location services or GPS hardware, if available. Geofencing can enforce other policies or take specific actions when a device is taken outside its usual area of operation. If the laptop had been enabled to operate freely only while on the premises or at the developer's home, and authentication were required when far from those areas, the attackers would have had a harder time doing their deed. This policy requires the presence of other policies to be truly effective: two-factor authentication might thwart an attacker's action after geofencing forcefully closes an open or cached session; encryption might make it harder for him or her to reset the user account's local password, therefore blocking future attempts to access the system.

9. Keep your software up to date. This might seem like a no-brainer, but it is in fact ignored many times. How many times have we postponed an update to just finish this one thing, or to avoid reopening a complex working session? Vulnerable software may prevent all other policies from working as they should: a recently patched vulnerability may be used against a user who did not update his or her system to work around the screen lock, or it may be used to defeat encryption, or to disable the automatic wipe policy. Up-to-date software is just another link in the long chain of security policies and must always be maintained. This does not prevent the use of 0-day exploits, but it limits the tools an attacker has at his or her disposal, giving you more time to secure your data via a
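The kind of geographical check described above can be sketched with a small, self-contained function. All names here are illustrative (this is not a real Auth0 API): it flags a pair of logins whose implied travel speed between two coordinates is physically implausible.

```javascript
// Illustrative "impossible travel" check: given two logins with coordinates
// and timestamps, flag the pair if the implied speed exceeds a plausible max.
function haversineKm(lat1, lon1, lat2, lon2) {
  const toRad = d => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 6371 * 2 * Math.asin(Math.sqrt(a)); // Earth radius ~6371 km
}

function isImpossibleTravel(a, b, maxKmh = 900) { // ~airliner cruising speed
  const hours = Math.abs(b.time - a.time) / 3.6e6; // milliseconds -> hours
  if (hours === 0) return true;
  return haversineKm(a.lat, a.lon, b.lat, b.lon) / hours > maxKmh;
}

// A login from Buenos Aires followed ten minutes later by one from Seattle
// implies a speed no airliner can reach, so the pair is flagged.
const first = { lat: -34.6, lon: -58.4, time: Date.now() };
const second = { lat: 47.6, lon: -122.3, time: first.time + 10 * 60 * 1000 };
console.log(isImpossibleTravel(first, second)); // true
```

A real system would feed this from login audit logs and respond by forcing re-authentication or revoking the session, as the text suggests.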
remote wipe, revoking credentials, a connectivity blackout, etc.

10. For developers: have usable mock data and credentials for local tests. This is an extension of the no-local-copy policy (tip 6). Developers sometimes (in fact, most of the time) need actual data to develop and test software features. Sometimes, very specific data may be needed to test or develop a feature: think of a rare corner case where some feature fails. It may be caused by a very specific set of factors that may only be reproducible using production data. This is probably the worst-case scenario for development, because production systems are sensitive in various ways, not only from a security point of view. One way to go about this is to have mock or test data in a specially crafted development environment where development can be performed. This environment should not use old (but still sensitive) data, because setting it up that way opens the possibility for leaks such as the one we have seen this week. Truth is, sometimes not even mock data can be used to debug certain corner cases. Real data can be used, but only after a proper policy for removal (after fixing the problem) is in place. This need not be automated, but it is generally better if it is. Debugging using real data can also be limited to on-premise systems, or to non-portable systems for which the risk of theft is generally low. Security is about finding the right set of compromises for your case. If in doubt, lean towards the more restrictive (less-convenient-but-more-secure) option. The same thing applies to credentials: if you require credentials for development or testing, never use any credential that can be used for anything more than that. The credentials should only expose and have access to mock data, never real production data.

Aside: how Auth0 helps to protect your data. At Auth0, security is of the utmost importance. Our authentication and authorization services let you set up multiple policies that help greatly when it comes to security. Some of these are: two-factor authentication; rules for logging
out users automatically, revoking credentials, or even selectively enabling two-factor auth; automatic breached-password detection using a huge database of leaks; Auth0 Guardian to simplify multifactor or per-resource authentication, using email, TOTP, SMS, or push notifications; passwordless authentication, to remove the need for users to come up with secure passwords; and per-client on-site installations, to keep things in house for the most sensitive systems. Sign up today and try a free account to get a taste of how easily these features can be set up.

Conclusion. Security is hard. One of the ways of dealing with the difficulties associated with it is by keeping multiple layers of security that complement each other. The policies and tips discussed in this article are some of the most basic and simple to implement. When it comes to sensitive information, the more layers, the better. With the advances in modern operating systems and portable devices, enabling many of these is just a matter of a couple of clicks; others require disciplined users and constrained environments. If your job involves working with sensitive data, there is no excuse not to follow some of these tips. Make sure you only keep as much data as necessary to complete the job, make sure it is not easy to access in case of theft, and keep proper automated and remote procedures to make sure that data is wiped whenever it is not required anymore. Following these tips might just save your job, or your employer's.", "image" : "https://cdn.auth0.com/blog/navy-leak/logo.png", "date" : "November 25, 2016" } , { "title" : "How Passwordless Authentication Works", "description" : "Learn the nitty-gritty of Passwordless Authentication", "author_name" : "Prosper Otemuyiwa", "author_avatar" : "https://en.gravatar.com/avatar/1097492785caf9ffeebffeb624202d8f?s=200", "author_url" : "http://twitter.com/unicodeveloper?lang=en", "tags" : "passwordless", "url" : "/how-passwordless-authentication-works/", "keyword" : "TL;DR: Security is a key aspect of software
development. Securing your authentication and authorization process can't be overemphasized. Over the years, developers have come up with different strategies for handling authentication in a way that provides maximum security for the user. One of the latest strategies is authenticating without passwords. Popular applications like Medium, Slack, and WhatsApp widely support and encourage passwordless authentication. In this article, you'll get to understand the nitty-gritty of passwordless authentication.

What is passwordless authentication? Passwordless authentication is a type of authentication where users do not need to log in with passwords. This form of authentication makes passwords totally obsolete. With this form of authentication, users are presented with the option of logging in simply via a magic link, a fingerprint, or a token that is delivered via email or text message.

How did passwordless authentication come about? Cases of stolen and hacked passwords have been on the rise. Many cases, such as the Yahoo data breach, the Dropbox user accounts leak, and the LinkedIn data breach, had to do with several passwords being leaked. In addition, platforms and applications keep emerging by the day, and users have to register and set passwords for almost every one of them. Users are finding it really hard to keep up, thus encouraging them to use the same password for several applications. This is a very common occurrence. Now, there is a problem with this approach: once a hacker gets access to a user's password for one application, the hacker has a high probability of gaining access to every other account the user possesses. Password managers like LastPass and 1Password attempt to combat the challenge of users having to remember strong, crazy, and unique passwords across various systems. With these challenges staring down at us like a monster, what if there were no more passwords to be hacked? What if there were no more passwords for users to remember? What if we discarded the use of passwords totally? Passwordless authentication to the rescue!

Benefits of passwordless authentication. Without much ado, passwordless authentication helps: improve user experience. The faster users can sign up and use your service, the more users your app tends to attract. Users dread having to fill out forms and go through a rigorous registration process. Imagine eliminating that extra five minutes of asking users to remember their grandmother's maiden name as a security question. Passwordless authentication helps improve user experience in this regard. Increase security: once you go passwordless, there are no passwords to be hacked.

How does passwordless authentication really work? Having given a refresher on what passwordless authentication is and the benefits of implementing it, let's take an in-depth look at the process of implementing passwordless authentication in a typical application. Passwordless authentication can be implemented in various forms.

Authentication with a magic link via email: the user is asked to enter their email address. Once the user submits the email address, a unique token or code is created and stored. An email with a URL that contains the unique token is generated and sent to the user. When the link is clicked by the user, your server verifies that the unique token is valid and exchanges it for a long-lived token, which is stored in your database and sent back to the client to be stored, typically as a browser cookie. There will also be checks on the server to ensure that the link was clicked within a certain period, e.g. three minutes. Let's take a look at Auth0's magic-link implementation below: Auth0 sends a clickable link to your email, and the user is then logged in.

Authentication with a one-time code via email: the user is requested to enter their email address. An email is sent to the user with a unique one-time code. Once the user enters this code into your application, your app validates that the code is correct, a session is initiated, and the user is logged in. Let's look at Auth0's one-time code via email implementation below: if the email address matches an existing user, Auth0
just authenticates the user, like so.

Authentication with a one-time code via SMS: the user is asked to enter a valid phone number. A unique one-time code is then sent to the phone number. Your app validates that the code is correct and that the phone number exists and belongs to a user; a session is initiated, and the user is logged in. Let's look at Auth0's one-time code via SMS implementation below, if the phone number matches an existing user.

Authentication with a fingerprint: the user is asked to place their finger on a mobile device. A unique key pair is generated on the device, and a new user is created on the server that maps to the key. In Auth0's fingerprint implementation, Auth0 supports Touch ID for iOS; this is the authentication flow.

Aside: passwordless authentication with Auth0. With Auth0, passwordless authentication is dead simple to implement. The diagrams earlier in this post already show the passwordless authentication flow using Auth0; you must have noticed the Passwordless API in those diagrams. This is a battle-tested and efficient API implementation of passwordless authentication. You can check out how it works under the hood, or simply build your own implementation on top of it. We can also easily configure our applications to use Auth0 Lock for passwordless authentication. Let's quickly create an application that implements magic links by following the steps below: clone this repo; create an Auth0 account for free; on the dashboard, click on the red Create Client button to create a new app, like so; head over to the Passwordless Connections side of the dashboard and enable the email option (enable the passwordless app, enable magic link, and save the configuration for the passwordless app); head over to the Settings tab for the passwordless app and copy your client_id and domain (Settings tab); open up auth0-variables.js in your code and replace the AUTH0_CLIENT_ID and AUTH0_DOMAIN values with your real Auth0 keys; run and test the app; click the magic link button; follow the instructions and sign in; submit your email on the Lock widget; a notification modal to show that the
link has been sent; the magic link arrives via email; the user is signed in via the magic link. If you don't want to go through the process of creating an app, there is an online version you can play with here.

Conclusion. There is no doubt that passwords have become more susceptible to being compromised in recent years. Passwordless authentication aims to eliminate authentication vulnerabilities. A recent analysis of passwordless connections shows that passwordless adoption is increasing. Passwordless authentication is also very useful and gaining ground in the IoT world: it's easier, friendlier, and faster to be authenticated into an IoT device via Touch ID, push notification, or even a one-time passcode than with traditional means. If you really care about security, you should look into passwordless authentication. We have covered how to implement practical passwordless authentication in an application using magic links; you can follow a similar process to achieve the same objective using a one-time code via SMS. Sign up for Auth0 and implement passwordless authentication today. If you care about security, tweet this.", "image" : "https://cdn.auth0.com/blog/PasswordlessLogo.png", "date" : "November 23, 2016" } , { "title" : "Build and Authenticate a Node Js App with JSON Web Tokens", "description" : "Node.js allows you to build backend applications with JavaScript. In this tutorial we'll take a look at how you can secure your Node.js applications with JSON Web Tokens (JWTs).", "author_name" : "Ado Kukic", "author_avatar" : "https://s.gravatar.com/avatar/99c4080f412ccf46b9b564db7f482907?s=200", "author_url" : "https://twitter.com/kukicado", "tags" : "nodejs", "url" : "/building-and-authenticating-nodejs-apps/", "keyword" : "TL;DR: Node.js brings the simplicity of JavaScript to the backend. Today, we will build an entire application with Node.js, starting with a blank canvas and finishing with a fully functional application with multiple routes, authentication, and even remote data access. Check out the completed code example from our GitHub repo.

Node.js (or nodejs, Node JS, or simply Node) was first released in 2009 by Ryan Dahl and has become one of the most popular backend programming languages today. Node.js is, for all intents and purposes, JavaScript, but instead of running in the user's browser, Node.js code is executed on the backend. Developers familiar with JavaScript will be able to dive right in and write Node.js code. In our tutorial today, we will write a complete Node.js application using one of the most popular web frameworks for Node, Express.js. We'll cover everything from project setup to routing, calling external APIs, and more. Before we dive into the code, let's understand why Node.js is so popular and widely used, to give you a deeper understanding of why you may want to use Node.js for your applications.

The rise of Node.js. Node.js became an overnight sensation for multiple reasons. Performance played a huge factor: Node.js was built around an event-based architecture and asynchronous I/O. This allows Node.js applications to achieve superior I/O and load performance compared to other programming languages. Node.js code is compiled to machine code via Google's V8 JavaScript engine. Let's take a look at a few other factors that led to the rise of Node.js.

JavaScript on the backend. JavaScript as a programming language has many flaws; it is also the only language that runs in the
web browser today. If you want your website or app to have any type of dynamic functionality, you'll have to implement it in JavaScript. This fact led many developers to learn JavaScript, and soon many open source libraries followed. Due to Node.js being JavaScript, many of these libraries, such as lodash, moment, and request, could be used on the backend without any modification whatsoever. In many instances, developers were able to write their code once and have it run on both the frontend and the backend, allowing many to quickly become full-stack developers.

Node Package Manager. The Node Package Manager, or npm, is one of the biggest reasons for Node's popularity. npm allows developers to easily manage all of the wonderful libraries released by the open source community. Developers can simply type a command like npm install lodash, and the Node Package Manager will go out and download the latest version of lodash into a special node_modules directory; then the developer can access the lodash library by just requiring it in their code. npm was revolutionary, and to this day it remains one of the best package managers around. It was not the first package manager (NuGet exists for the .NET platform, pip for Python, gems for Ruby, and so on), but the simplicity of npm has had a major role in Node's success.

Ecosystem. Node.js is not limited to building web applications. With Electron, for example, you can build native desktop applications with Node; we even have a tutorial on how to do that here. Utilities and build systems are very popular candidates for Node.js: Bower is a popular front-end package manager built with Node, while gulp, Grunt, and webpack are task runners and build systems built with Node that can improve workflows and increase developer efficiency. Due to the small footprint and low resource requirements of running Node.js applications, Node.js is leading the charge in serverless computing, with platforms like Webtask, AWS Lambda, and Google Cloud Functions all supporting Node.js almost exclusively.

Is Node.js for me? The age-old debate, and probably the most difficult question to answer: it depends. This may seem like a cop-out answer, but it really does depend. Here at Auth0, we use Node.js extensively, and it's proven its worth in helping us scale; check out our stories-from-the-trenches blog for more in-depth coverage of how we make use of various technologies throughout our organization. Node.js is great for many use cases, but not so good for others. If you need high I/O that doesn't require a lot of computation, such as serving assets or webpages, then Node will keep you satisfied. If you are doing complex operations, such as hashing passwords or running simulations, Node.js will likely underperform. Examine your use case carefully and use the right tool for the job. Node.js excels in many use cases, but is not a silver bullet for everything. Tweet this!

Building an application with Node.js. Now that we know more about Node.js, we are ready to get coding and build our application. The application we'll be building today is called Awesome Polls. The United States just had its presidential elections, and we are going to build an app to help us analyse the results. Imagine you're building this app for a news organization that wishes to have the most up-to-date data so that it can provide accurate reports to its viewers. For this tutorial, we will assume that you have at least some JavaScript and general programming knowledge, but no Node.js exposure, so we'll take it step by step. As always, you can check out the completed code from the GitHub repo if you would like to just follow along.

Installing Node.js and npm. To install Node.js, head over to the official website located at https://nodejs.org and you'll see a giant green download button for your operating system. We'll be running the 6.x LTS version of Node. Simply download the executable, run it, and go through the steps to install Node.js on your system. If you are on a Mac, you can also install Node.js and npm via Homebrew: simply run brew install node from your terminal, and in seconds Node and npm will be installed. We
will want to ensure that both npm and Node are installed. Once you've gone through the installation steps, either manually or via Homebrew, we'll confirm that the installation was successful. To do that, close and re-open your terminal window and run the command node -v. This command will let you know the current version of Node installed. Next, run npm -v, and likewise you should see the version of the Node Package Manager installed on your system. Note: Node.js has two versions, a 6.x stable/long-term-support version and 7.x, which is the cutting-edge version that supports some of the latest ES6 features. Both versions are production-ready, and for this tutorial we'll be using the 6.x version of Node.

Node.js project setup. Now that we have Node and npm installed, we are ready to continue. One of the best things about Node applications, for me personally, is the ability to have your application live anywhere in the file system. Each Node application is self-contained, so to set up our project, just create a directory on your desktop called awesome-polls, and we'll place our entire application in this directory. The first file we'll add to this project is a package.json file. This file will keep track of all of our dependencies as well as provide some useful info about our application. You can either manually create this file or run the command npm init and walk through the step-by-step process. Remember to navigate to the awesome-polls directory in your terminal first, otherwise your package.json file will be created elsewhere. Now that we have our package.json, we can add and save our dependencies. There are multiple ways to do this: we could manually write our dependencies in the package.json, for example, but the preferred way is to actually run the npm install command and pass the --save flag, which will automatically add the dependency to your package.json. Let's see how this works. We will use the Express JavaScript web framework for building our application. Currently, we don't have Express installed on our machine. To get it, simply run npm install express --save. In just a few seconds, Express will be downloaded and stored in a new directory in your file system called node_modules. This directory will be located in your awesome-polls directory and is a local dependency. You can also install global dependencies by passing a -g flag, but you probably won't want to do this for the majority of the libraries you install; utilities such as webpack are what you'd install globally. You can also install multiple dependencies at once. Let's install the rest of our dependencies; write the following: npm install body-parser connect-ensure-login cookie-parser debug dotenv express-session jade morgan passport passport-auth0 --save. These are all of the third-party open source libraries we will rely on to write our application. It's ok if you don't understand what many of these mean or do just yet; we'll get there. To close out this section, take a look at your package.json file and you'll see that there is a new section called dependencies with the libraries we've included.

Node.js directory structure. Node.js and Express.js are both pretty unopinionated when it comes to directory structure. You are free to define your own and won't be penalized for having too many or too few layers of abstraction. At the end of the day, the code is compiled and the code structure flattened, so feel free to experiment with what works best for you. This will also depend a lot on the size and scope of your application. Our demo app is fairly small, so our structure will look like:

.env - // we will store our global environment variables here
package.json - // we will define our app's external dependencies here
app.js - // this file will be our entry point into the application
- node_modules - // automatically generated; npm will store our external dependencies here
- public
  - stylesheets
    - style.css - // we'll store our global styles here
- routes
  - index.js - // in this file we'll define the routes for the application
- views - // we'll place all of our UI views here
  - error.jade - // our view for the error page
  - index.jade - // our main homepage view

Our directory structure
is fairly simple. We'll build our app in an MVC-style fashion: our views directory will hold all of our front-end views, while the routes directory will handle the traditional controller logic. We won't have any models for this simple application. Again, it's ok if some of these files don't make sense just yet; I'll explain them all in detail shortly.

Building Awesome Polls. Let's write some Node.js code! The first piece of functionality that we will implement is our main entry point into the application. Open up the app.js file (or create it if you haven't already). For now, let's add the following:

// We saw how we could download dependencies via npm. To use those dependencies
// in our code, we require them. The syntax to require a library is the keyword
// require and a string for the name of the library. We assign this require
// function to a variable and can then access methods from the library through
// that variable. Here we are requiring all of our dependencies at the top of
// the page, as is good practice.
var express = require('express');
var path = require('path');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');
var session = require('express-session');
var dotenv = require('dotenv');

// We are using the dotenv library to load our environment variables from the
// .env file. We don't have anything in the .env file for now, but we will soon.
dotenv.load();

// Just like external libraries, we can import our application code using the
// require function. The major difference is that we have to give the exact
// path to our file. We saw in the directory structure section that we will
// have an index.js file in a routes directory. Go ahead and create it if you
// haven't, otherwise you'll get errors when compiling the code.
var routes = require('./routes/index');

// This line of code instantiates the Express.js framework. We assign it to a
// variable called app, and will add our configuration to this variable.
var app = express();

// The .set method allows us to configure various options with the Express
// framework. Here we are setting our views directory as well as telling
// Express that our templating engine will be Jade. More on that soon.
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');

// The .use method is similar to the .set method, where it allows us to set
// further configurations. The .use method also acts as a chain of events that
// will take place once a request hits our Node.js application. First we'll log
// the request data, then parse any incoming data, and so on.
app.use(logger('dev'));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(session({
  // Here we are creating a unique session identifier
  secret: 'shhhhhhhhh',
  resave: true,
  saveUninitialized: true
}));
app.use(express.static(path.join(__dirname, 'public')));

// Catch 404 and forward to the error handler
app.use(function(req, res, next) {
  var err = new Error('Not Found');
  err.status = 404;
  next(err);
});

// If our application encounters an error, we'll display the error and
// stacktrace accordingly.
app.use(function(err, req, res, next) {
  res.status(err.status || 500);
  res.send(err.message);
});

// Finally, we'll choose to have our app listen on port 3000. This means that
// once we launch our app, we'll be able to navigate to localhost:3000 and see
// our app in action. You are free to choose any port you want, so 8080, or 80,
// or really any number will work. The reason 3000 is typically used is that
// it's the lowest port number that can be used without requiring elevated
// privileges on Mac/Linux systems.
app.listen(3000);

Let's test our app so far. To run our app, simply run the command node app in your terminal, then navigate to localhost:3000 in your web browser. If all went as expected, you should just see a 404 page-not-found error. That is the expected behavior, since we did not add any routes to our application, but we did add a page-not-found error handler. Let's add some routes.

Express.js routing. If you followed along with our directory structure, you'll have created an index.js file in a directory titled routes. If you haven't already done so, go ahead and create this file and open it. We will define our application routes here. To accomplish this, we'll write the following:

// Again, we are importing the libraries we are going to use.
var express = require('express');
var router = express.Router();

// On our router variable, we'll be able to include various methods. For our
// app we'll only make use of GET requests, so the method router.get will
// handle that interaction. This method takes a string as its first parameter,
// and that is the URL path: for the first route we are just giving it '/',
// which means the default route. Next we are defining a Node.js callback
// function that takes three parameters: a request, a response, and an optional
// next parameter. Finally, in our callback function, we just send the message
// 'You are on the homepage'.
router.get('/', function(req, res, next) {
  res.send('You are on the homepage');
});

// We are going to do the same thing for the remaining routes.
router.get('/login', function(req, res, next) {
  res.send('You are on the login page');
});

router.get('/logout', function(req, res, next) {
  res.send('You are on the logout page');
});

router.get('/polls', function(req, res, next) {
  res.send('You are on the polls page');
});

router.get('/user', function(req, res, next) {
  res.send('You are on the user page');
});

// We export this module so that we can import it in our app.js file and gain
// access to the routes we defined.
module.exports = router;

Before moving on, let me briefly explain how routing in Express works. When we define a route, say our /user route, and pass the callback function, we are telling Express that when the browser points to localhost:3000/user, the specified callback function will be called. The req parameter will have all the details of the request, such as the IP address, parameters passed with the route, and even items we attach to it through Express middleware. The res parameter handles our response from the server to the browser; here we can return a view, an error, JSON data, and so on. We can optionally add a next parameter: calling next will exit the current function and move down the middleware stack. The way requests are processed in Express.js is that they go through a stack of functions. At the end of each function, you can either call next to go to the next function in the stack, or call a res method and send a response to the browser. Once an appropriate res method has been called, the execution of that request is stopped. Middleware is a great way to separate our code into logical pieces: we can have middleware that transforms our request, or that checks to see if a user is logged in before continuing. We'll see how to do that in the next section. Let's get back to our routes. We have defined them, but if we run our application and try to access localhost:3000/login, for
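That next()-driven stack can be illustrated with a tiny dependency-free sketch. This is not Express itself, and the handler names are hypothetical; it only demonstrates the pattern of each function either calling next() or ending the request.

```javascript
// Minimal illustration of Express-style middleware: each function either calls
// next() to continue down the stack or ends the request by writing a response.
function runStack(stack, req, res) {
  let i = 0;
  const next = () => { if (i < stack.length) stack[i++](req, res, next); };
  next();
}

const requestLogger = (req, res, next) => {
  req.log = ['hit ' + req.url]; // transform the request, then hand off
  next();
};

const requireLogin = (req, res, next) => {
  if (req.user) return next();      // logged in: continue down the stack
  res.body = 'redirect to /login';  // not logged in: end the request here
};

const pollsHandler = (req, res) => {
  res.body = 'polls for ' + req.user; // terminal handler: writes the response
};

const anonymous = {};
runStack([requestLogger, requireLogin, pollsHandler], { url: '/polls' }, anonymous);
// anonymous.body === 'redirect to /login'

const loggedIn = {};
runStack([requestLogger, requireLogin, pollsHandler], { url: '/polls', user: 'ana' }, loggedIn);
// loggedIn.body === 'polls for ana'
```

The guard middleware plays the same role connect-ensure-login's ensureLoggedIn plays later in this tutorial: it sits in front of the handler and only calls next() for authenticated requests.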
example, we'll still see the 404 error. We haven't linked our routes to our app; let's do that next. Open the app.js file and we'll make the following changes:

// We have commented out the existing code so that you can see where to add the
// new code. Do not comment out anything in your file.
// var express = require('express');
// var path = require('path');
// var logger = require('morgan');
// var cookieParser = require('cookie-parser');
// var bodyParser = require('body-parser');
// var session = require('express-session');
// var dotenv = require('dotenv');
// dotenv.load();
// ...the major difference is, we have to give the exact path to our file:
// var routes = require('./routes/index');
// var app = express();
// ...
// app.use(session({
//   secret: 'shhhhhhhhh',
//   resave: true,
//   saveUninitialized: true
// }));

// Here we are going to add our routes in a use statement, which will link the
// routes we defined to our app.
app.use('/', routes);

// app.use(function(req, res, next) {
//   var err = new Error('Not Found');
//   err.status = 404;
//   next(err);
// });
// app.use(function(err, req, res, next) {
//   res.status(err.status || 500);
//   res.render('error', {
//     message: err.message,
//     error: err
//   });
// });

With this change saved, restart your Node server and navigate to localhost:3000/user, and you should just see the text 'You are on the user page' displayed. If we go to a route that we haven't defined, like localhost:3000/yo, we'll get the 404 page like we'd expect. Alright, so far so good: we have our routes working. Next, let's go ahead and build our UI views.

Building the UI. Next, let's build our views. Node.js and Express are very extensible, and we have a lot of choices and options when choosing a templating engine for our application. In this tutorial, we will use Jade (recently renamed to Pug). Jade is perhaps one of the oldest view engines, but other options such as EJS, Mustache, Dust, and so on exist. In our app, we already declared that our view engine is going to be Jade and that our views will be stored in a directory titled views. In this tutorial, we won't go over the Jade/Pug syntax, so if you are unfamiliar, please check out the official tutorial. We are going to build five unique views. Jade/Pug allows us to extend one layout and build on top of it, so we are going to do that in this simple application. Let's create a file named layout.jade; our views will extend this layout and add on their unique properties. The contents of this file will be as
follows:

doctype html
html
  head
    meta(charset='utf-8')
    title= title
    link(rel='stylesheet', href='/stylesheets/style.css')
    link(rel='stylesheet', href='https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css')
    link(rel='stylesheet', href='https://maxcdn.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css')
    script(src='//cdn.auth0.com/js/lock/10.3/lock.min.js')
  body
    block content

Next let's build our homepage. Our homepage will just display the name of our app and present the user a link to log in. Create a file called index.jade and paste in the following:

extends layout
block content
  h1
    i.fa.fa-lg.fa-pie-chart
    span Awesome Polls
  h2 Welcome to the Awesome Polls administrator website
  p To access the polls, please login.
  br
  a.btn(href='/login') Login

For our next page, let's build the user details page. This is where we'll display the logged-in user's information. Create a user.jade file; the implementation is as follows:

extends layout
block content
  img(src='#{user.picture}')
  h2 Welcome #{user.nickname}
  a(href='/logout') Logout

With the user's page done, let's build the polls page. Create a file called polls.jade:

extends layout
block content
  div.clearfix
    div.pull-left
      i.fa.fa-pie-chart
      span Awesome Polls
    div.pull-right
      img(style='height: 24px; border-radius: 30px;', src='#{user.picture}')
      strong(style='margin: 0px 10px;') #{user.nickname}
      a(href='/logout') Logout
  br
  div.jumbotron
    h1.text-center 2016 Presidential Election
  each poll, index in polls
    if poll.estimates.length > 0
      div.col-sm-4
        div(class='panel panel-default', style='min-height: 150px;')
          div.panel-heading
            div.panel-title= poll.short_title
          div.panel-body(style='height: 100px;')
            ul.list-unstyled
              each person, index in poll.estimates
                li
                  if index == 0
                    p
                      strong #{person.choice}
                    div.progress
                      div.progress-bar.progress-bar-success(style='width: #{person.value}%', role='progressbar')
                        span= person.value
                  else
                    p
                      span #{person.choice}
                    div.progress
                      div.progress-bar.progress-bar-info(style='width: #{person.value}%', role='progressbar')
                        span= person.value
          div.panel-footer
            a.btn.btn-sm View Results
            a.btn-sm.write-report Write Report

Next, let's pretty up our error page. We'll create a file called error.jade and paste in the following code:

extends layout
block content
  h1= message
  h2= error.status
  pre #{error.stack}

Lastly, we'll also create a stub for our login page by creating a file called login.jade, but we'll leave it blank for now.

Wiring up our views and controllers. Finally, we are ready to wire up our views and controllers with
actual functionality. Remember, we are storing our controllers in routes/index.js, so let's open up that file and make the following adjustments. At the top we add:

var passport = require('passport');
var ensureLoggedIn = require('connect-ensure-login').ensureLoggedIn;
var request = require('request');

// We are going to want to share some data between our server and UI,
// so we'll be sure to pass that data in an env variable.
var env = {};

Now, rather than just sending text, we are going to actually render the views we created using the res.render method; the second argument allows us to pass data from the backend to our view dynamically, as in res.render('index', { env: env }). Same thing for the login page. For the logout page, we don't need to render a page; we just want the user to be logged out when they hit it, so we'll use the Express.js built-in logout method and then redirect the user back to the homepage. You may have noticed that we included two new require files, one of them being request. Request allows us to easily make HTTP requests; in our instance here, we are using the Huffington Post's API to pull the latest election results, and we're sending that data to our polls view. The second require was the connect-ensure-login library, from which we just required a method called ensureLoggedIn, which will check and see if the current user is logged in before rendering the page; if they are not, they will be redirected to the login page. We are doing this in a middleware pattern: we first call the ensureLoggedIn method, wait for the result of that action, and finally execute our /polls controller. The polls controller requests http://elections.huffingtonpost.com/pollster/api/charts?topic=2016-president, and in its callback, if there is no error and response.statusCode == 200, we parse the body with JSON.parse. For this view, we are not only sending our environment information, but the polls and user information as well: res.render('polls', { env: env, user: req.user, polls: polls }); otherwise, we send an error response. The same applies for our /user route, which renders the user view with the env and user data. This completes our controllers implementation. We did a lot in this section: we saw how we could send data between our server and front end, how
to use the excellent Node.js request library to make calls to an external API, and also how to secure our routes and prevent unauthorized access. We haven't built the user authentication system just yet; we'll do that next. Before we close out this section, let's make one quick change to our app. If you recall, in the app.js file we built our error handler; in the last section we created a pretty view for our errors, so let's make sure we're using that view by replacing the plain response with res.render('error', { message: err.message, error: err }).

Aside: Node.js authentication with Auth0. We set up a great foundation and our app is looking good. The final piece of the puzzle is to get authentication up and running so users can log in and view the polls. We'll use Auth0 to accomplish this, but the Passport.js library has strategies for all major authentication frameworks and providers. To get started, you'll need an Auth0 account; if you don't already have one, you can sign up for a free account here. Once you have an account, log in, navigate to the management dashboard, and retrieve your Auth0 app-specific keys. The three items you'll need specifically are: Client ID, Client Secret, and Domain. Once you have these three items, go ahead and open up the .env file we created and create a variable for each of these. Your completed .env file should look like this:

AUTH0_CLIENT_ID=mrgds6qiasvbpurqgwlg37anunbh2opd
AUTH0_DOMAIN=your-auth0-domain.com
AUTH0_CLIENT_SECRET=cvqnk5uufc8qtyf0mw7rkt5teocgkxo9vs1baxor1zdydpvnxr0f5bbcjd32dmsn

Next, we will create our Auth0 authentication strategy. Check out the changes below; we are including the original code we've written so far, but have commented it out so you can see the changes we are adding:

// Passport is the most popular Node.js authentication library.
var passport = require('passport');
// We are including the Auth0 authentication strategy for Passport.
var Auth0Strategy = require('passport-auth0');
// var routes = require('./routes/index');

// This will configure Passport to use Auth0.
var strategy = new Auth0Strategy({
  domain: process.env.AUTH0_DOMAIN,
  clientID: process.env.AUTH0_CLIENT_ID,
  clientSecret: process.env.AUTH0_CLIENT_SECRET,
  callbackURL: 'http://localhost:3000/callback'
}, function (accessToken, refreshToken, extraParams, profile, done) {
  // accessToken is the token to call the Auth0 API (not needed in most cases).
  // extraParams.id_token has the JSON Web Token.
  // profile has all the information from the user.
  return done(null, profile);
});

// Here we are adding the Auth0 strategy to our Passport framework.
passport.use(strategy);

// The serialize and deserialize user methods will allow us to get the
// user data once they are logged in.
passport.serializeUser(function (user, done) { done(null, user); });
passport.deserializeUser(function (user, done) { done(null, user); });

// We are also adding Passport to our middleware flow.
app.use(passport.initialize());
app.use(passport.session());

We have created an Auth0 authentication strategy and registered it with our application. We already have a login route, but we haven't implemented the UI. Open up the login.jade file and add the following code:

extends layout
block content
  div#root(style='width: 280px; margin: 40px auto; padding: 10px;')
  script.
    var lock = new Auth0Lock('#{env.AUTH0_CLIENT_ID}', '#{env.AUTH0_DOMAIN}', {
      auth: {
        redirectUrl: '#{env.AUTH0_CALLBACK_URL}',
        responseType: 'code',
        params: { scope: 'openid name email picture' }
      }
    });
    lock.show();

We will make use of the Auth0 Lock widget for our authentication flow. Lock allows us to easily and effortlessly add a login box that can handle traditional username and password, social, and enterprise login methods, as well as additional features like multifactor authentication, all with the flip of a switch. You may have noticed in the login.jade file that we are requiring some data from the env variable, but we are currently not passing those specific variables. Let's fix that. Open up the index.js page in the routes directory and let's make some final adjustments here as well, adding AUTH0_CLIENT_ID, AUTH0_DOMAIN, and AUTH0_CALLBACK_URL to the env object. We are also going to implement the callback route, which will redirect the logged-in user to the polls page if authentication succeeds:

router.get('/callback', passport.authenticate('auth0', { failureRedirect: '/login' }), function (req, res) {
  res.redirect(req.session.returnTo || '/polls');
});

We are now finally ready to test our application. Restart the server and head over to localhost:3000. You will be greeted with the homepage. Click on the login button, and you will be sent to
the /login page, where the Lock widget will be opened and you will be able to sign up or log in. Log in, or sign up if you haven't already created a test account, and you will be redirected to the /polls page. On this page, you will be able to see the results for all 50 states. We got this data using the Node.js request library and querying the Huffington Post API. Click on the logout link in the top right corner, and your user will be logged out and sent back to the homepage. Now that you are logged out, try accessing the /polls page and notice that since you are no longer logged in, you are redirected to the /login page. Congrats! You just built an entire Node.js app and added authentication to it.

Conclusion. Node.js is a powerful language and framework for building modern applications. The community support through npm is unrivaled, and Auth0 can help secure your Node.js apps with not just state-of-the-art authentication, but enhanced features like multifactor auth, anomaly detection, enterprise federation, single sign on (SSO), and more. Sign up today so you can focus on building features unique to your app. With Auth0, you can add authentication to your Node.js app in minutes. Tweet this!", "image" : "https://cdn.auth0.com/blog/nodejs-awesome-polls/nodejs_logo.png", "date" : "November 21, 2016" } , { "title" : "Introducing the Auth0 Ambassador Program!", "description" : "Introducing our new mentorship program that will help you master developer evangelism skills.", "author_name" : "Kunal Batra", "author_avatar" : "https://s.gravatar.com/avatar/5bdf34cb56195a699562bb1468013154.png", "author_url" : "https://www.twitter.com/kunal732", "tags" : "Ambassador", "url" : "/announcing-auth0-ambassador-program/", "keyword" : "TL;DR: Apply to the Ambassador Program, our new initiative to help you master developer evangelism skills such as community building, technical content creation, public speaking, and more.

Background. Here at Auth0, our core mission is to make the internet safer. We do this by serving the developer community and giving them the
tools they need for their applications to be secure and successful. We strongly believe that we cannot be successful in making the internet safer without an incredibly valuable and engaged group of people helping us. That's why we want to recognize developers who want to have a real impact within the community.

What is it like to be an Auth0 Ambassador? As an Auth0 Ambassador, you'll gain the support of our evangelism team. We'll help you build up your brand and master the evangelism skillset by: giving you opportunities to mentor developers at startup accelerators, hackathons, and conferences; helping you organize local meetups and providing swag and sponsorship; building up your technical speaking and writing skills; helping you find and create open source projects that you can get involved in; showcasing the awesome hacks you build on top of our APIs; and more. Everything is on a voluntary basis; you can do as much or as little as your schedule permits. However, the more you do, the more rewards and perks you get.

Who can apply? Our passion is to make developers' lives more awesome, and we're looking for developers who feel the same way: you naturally get satisfaction and enjoyment when you teach or solve problems for others. To see our other criteria, check out our Ambassador Program page.

Rewards and perks of being an Auth0 Ambassador. While this is not a paid position, our evangelism team set up the program so that the more impact you make, the more perks you will get. For a detailed list, visit our Ambassador page. Some of the perks are: we'll pay for travel and lodging to any of your accepted talks that cover Auth0 or identity/security; we'll have Auth0 sponsor local developer meetups of your choice; tickets to participate in experiences such as StartupBus or similar events; and exclusive access to Auth0 gear that identifies you as part of the Ambassador Program. Check the website for more.

Get started now: apply today! If you have additional questions, email community@auth0.com.", "image" : "https://cdn.auth0.com/blog/ambassador-program/logo.png", "date" : "November 17, 2016" }
, { "title" : "A Rundown of JavaScript 2015 features", "description" : "Take a look at the features from ECMAScript/JavaScript 2015 and learn how they can help you in your projects", "author_name" : "Sebastián Peyrott", "author_avatar" : "https://en.gravatar.com/userimage/92476393/001c9ddc5ceb9829b6aaf24f5d28502a.png?size=200", "author_url" : "https://twitter.com/speyrott?lang=en", "tags" : "javascript", "url" : "/a-rundown-of-es6-features/", "keyword" : "In this article we will go over the new features of JavaScript/ECMAScript 2015, a major update to the language. We will place special emphasis on how these features can help in the development of ever bigger systems and how they compare to the old way of doing things. We will also show you how to set up a modern project with ECMAScript 2015 plus async/await support. Read on! 'ECMAScript 2015 is JavaScript for bigger projects.' Tweet this. This rundown is based on the excellent work of Luke Hoban and his es6features GitHub repository. Another great resource for those of you wishing to learn more is the Mozilla Developer Network. Of course, acknowledgements would not be complete without a reference to Dr. Rauschmayer's blog, where you can find in-depth looks at ECMAScript 2015.

Introduction. After years of slow development, JavaScript has seen a rebirth. Node.js and newer frontend frameworks and libraries have renewed the enthusiasm behind the language. Its use for medium and big systems has put people thinking hard on how JavaScript needs to grow. The result of this is ECMAScript 2015, a big update to the language that brings many ideas that had been in the works for a long time. Let's see how these ideas help to make JavaScript a better language for all uses today.

ECMAScript 2015 features: let and const. Since its inception, JavaScript had one way of declaring variables: var. The var statement, however, obeys the rules of variable hoisting; in other words, var declarations act as if the variables are declared at the top of the current execution context (the function). This may result
in unintuitive behavior:

function test() {
  // Intended to write to a global variable named 'foo'.
  foo = 2;
  // A lot of code goes here...
  for (var i = 0; i < 5; ++i) {
    // This declaration is moved to the top, causing the first
    // write to act on the local variable rather than a global one.
    var foo = i;
  }
}
test();
console.log(foo); // Should print 2 but results in an exception.

For big codebases, variable hoisting can result in unexpected and sometimes surprising behavior. In particular, variable declarations in many other popular languages are restricted to the lexical scope of the enclosing block, so newcomers to JavaScript may completely ignore the semantics of var. ECMAScript 2015 introduces two new ways of declaring variables: let and const. The behavior of these statements is much more in line with what other languages do.

let: the let statement works exactly as the var statement but with a big difference: let declarations are restricted to the enclosing scope and are only available from the point where the statement is located onwards. Variables declared inside a for loop, or simply inside enclosing brackets, are only valid inside that block, and only after that let statement. This behavior is much more intuitive. Using let is encouraged in place of var in most cases.

const: the notion of const is a bit more complex. All declarations in JavaScript are rebindable: a variable declaration establishes a connection between a name and a JavaScript object or primitive. This same name may later be rebound to a different object or primitive:

var foo = 3;                          // foo is bound to the primitive 3
foo = ['a', 'b', 'c', 'd', 'e', 'f']; // foo is now bound to an array object

The const statement, in contrast to the let and var statements, does not allow rebinding the name to a different object after the initial declaration:

const foo = 3;
foo = 4; // TypeError exception

It is important to note that const does not affect writability in any way. This is in contrast to the notion of const from languages such as C and C++; arguably, the choice of const as a name may not have been a good idea. Writability can be controlled using Object.defineProperty() and Object.freeze(), and has nothing to do with the const statement.
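The rebinding-versus-writability distinction can be checked directly. This small self-contained demo uses only language-level features (no assumed libraries): const blocks rebinding, while Object.freeze() is what actually controls writability:

```javascript
// const prevents rebinding the name; it does not make the value read-only.
const arr = [1, 2, 3];
arr.push(4);  // fine: the array itself is still mutable
// arr = [];  // would be a TypeError: assignment to constant variable

// Writability is a separate concept, controlled e.g. with Object.freeze().
const frozen = Object.freeze({ a: 1 });
try {
  frozen.a = 2; // silently ignored in sloppy mode, TypeError in strict mode
} catch (e) {
  // Strict mode lands here.
}
// Either way, frozen.a is still 1.
```

This mirrors the point above: the write to frozen.a fails regardless of const, and the push to arr succeeds despite const.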
Do remember that writing to read-only properties in non-strict mode is silently ignored; strict mode reports these errors as a TypeError exception. Placing stricter requirements on the way certain bindings can be manipulated can prevent coding mistakes, and in this sense both let and const help greatly.

Arrow functions and lexical this. JavaScript, by virtue of being a multi-paradigm language, makes use of many functional features. Of these features, closures and anonymous functions are essential. Arrow functions introduce a new, shorter syntax for declaring them. Let's see:

// Before ES2015
[1, 2, 3, 4].forEach(function (element, idx) {
  console.log(element, idx);
});

// After ES2015: arrow functions
[1, 2, 3, 4].forEach((element, idx) => console.log(element, idx));

At first this may seem like little improvement, but arrow functions behave differently when it comes to this, arguments, super, and new.target. All of these are local predefined declarations inside the scope of a function. Arrow functions, rather than declaring their own version of these elements, inherit the values from the enclosing function. This prevents mistakes and unclutters certain common coding patterns:

function Counter() {
  this.count = 20;
  setInterval(function callback() {
    ++this.count; // BUG: this points to the global object,
                  // or is undefined in strict mode
  }, 1000);
}
const counter = new Counter();

It is very easy to make a mistake like this. The old way of fixing it was rather cumbersome:

function Counter() {
  this.count = 20;
  // We will use 'that' whenever we require a reference to 'this'
  // inside a local function.
  var that = this;
  setInterval(function callback() {
    ++that.count;
  }, 1000);
}

With ECMAScript 2015, things are simpler and obvious:

function Counter() {
  this.count = 20;
  setInterval(() => {
    // 'this' is bound to the enclosing scope's 'this' value.
    ++this.count;
  }, 1000);
}

JavaScript classes. Since its inception JavaScript has supported object-oriented programming. However, the form of OOP implemented by JavaScript was not entirely familiar to many developers, especially those coming from the Java and C++ family of languages. These two languages, and many others, implement objects in the spirit of Simula 67. JavaScript, in contrast, implements objects in the spirit of Self; this model of OOP is known as prototype-based programming.
Prototype-based programming can be unintuitive for developers coming from other object models, and this has resulted in many JavaScript libraries coming up with their own way of using objects; these ways are sometimes incompatible. Prototype-based programming is powerful enough to model a class-based programming model, and library writers have come up with many ways of doing so. The lack of consensus on the way of doing this has caused fragmentation and coupling problems between libraries. ECMAScript 2015 attempts to fix this by providing a common way of doing class-based programming on top of prototypes. This has resulted in some controversy in the community, as many view the prototype-based approach as superior. Classes in ECMAScript 2015 are syntactic sugar for modeling classes on top of prototypes:

class Vehicle {
  constructor(maxSpeed) {
    this.maxSpeed = maxSpeed;
  }

  get maxSpeed() {
    return maxSpeed;
  }
}

class Car extends Vehicle {
  constructor(maxSpeed) {
    super(maxSpeed);
    this.wheelCount = 4;
  }
}

which in a prototype-based approach could look like:

function Vehicle(maxSpeed) {
  this.maxSpeed = maxSpeed;
}

Vehicle.prototype.maxSpeed = function () {
  return this.maxSpeed;
};

function Car(maxSpeed) {
  Vehicle.call(this, maxSpeed);
  this.wheelCount = 4;
}

Car.prototype = new Vehicle();

The exact steps taken by the JavaScript interpreter to translate classes to a prototype chain are available in the JavaScript specification. The actual usefulness of classes compared to lean prototypes for big projects is a matter of active discussion. Some people argue that class-based designs are harder to extend as the codebase grows, or, to paraphrase, that class-based designs require more forethought. Class proponents, on the other hand, argue that classes are more easily understood by developers coming from other languages, and that tried and proven designs are readily available as proof of their usefulness. One of the design objectives of Self, the language that inspired JavaScript's prototypes, was to avoid the problems of Simula-style objects. The dichotomy between classes and instances was seen as the cause for many of the inherent problems in Simula's
approach. It was argued that, as classes provided a certain archetype for object instances, as the code evolved and grew bigger it was harder and harder to adapt those base classes to unexpected new requirements. By making instances the archetypes from which new objects could be constructed, this limitation was to be removed. Thus the concept of prototypes: an instance that fills in the gaps of a new instance by providing its own behavior. If a prototype is deemed inappropriate for a new object, it can simply be cloned and modified without affecting all other child instances. This is arguably harder to do in a class-based approach (i.e., modifying base classes). Whatever your thoughts on the matter, one thing is clear: if you prefer to stick to a class-based approach, there is now one officially sanctioned way of doing so; otherwise, use prototypes to your heart's content.

JavaScript object-literal improvements. Another feature born out of practicality is the set of improvements to object literal declarations. Take a look:

function getKey() {
  return 'some key';
}

let obj = {
  // Prototypes can be set this way
  __proto__: prototypeObject,
  // Shorthand for someObject: someObject
  someObject,
  // Methods can now be defined this way
  method() {
    return 3;
  },
  // Dynamic values for keys
  [getKey()]: 'some value'
};

For contrast, the old way of doing things would require something like:

let obj = {
  someObject: someObject,
  method: function () {
    return 3;
  }
};
obj.__proto__ = prototypeObject;
obj[getKey()] = 'some value';

Anything that aids readability and keeps blocks of code that should belong together as close as possible helps to reduce the chances of making a mistake.

JavaScript template string literals. There comes a time in every project in which you will need to interpolate values into a string. The standard way of doing this in JavaScript was through repeated concatenations:

var str = 'The result of operation ' + op + ' is ' + someNumber;

Not very pretty, or maintainable for that matter. Imagine a much longer string with more values: things can get out of hand rather quickly. For this reason, libraries such as sprintf, inspired by C's
sprintf function, were created:

var str = sprintf('The result of operation %s is %s', op, someNumber);

Much better. But, very much like C's sprintf, perfect correlation between the format string and the values passed to sprintf is required: remove an argument from the call and now you have a bug. ECMAScript 2015 brings a much better solution to the table:

const str = `The result of operation ${op} is ${someNumber}`;

Simple, and harder to break. An additional feature of these new string literals is multiline support:

const str = `This is a very long string.
We have broken it into multiple
lines to make it easier to read.`;

Other additions with regards to strings are raw strings and tag functions. Raw strings can help prevent mistakes related to escape sequences and quote characters:

String.raw`Hi\u000A!` // The Unicode escape sequence is not processed.

The syntax may look odd if you don't grok string tags yet:

function tag(strings, ...values) {
  console.log(strings[0]); // 'Hello '
  console.log(strings[1]); // ' world '
  console.log(values[0]);  // 1
  console.log(values[1]);  // 'something'
  return 'This is the returned string; it need not use the arguments';
}

const foo = 1;
const bar = 'something';
tag`Hello ${foo} world ${bar}`;

Tag functions are essentially functions that transform string literals in arbitrary ways. As you can imagine, they can be abused in ways that impair readability, so use them with care.

ECMAScript 2015 promises. One of the biggest features in ECMAScript 2015, promises attempt to bring some sanity to the asynchronous nature of JavaScript. If you are a seasoned JavaScript developer you know callbacks and closures rule the day. You know, as well, that they are pretty flexible; that means everyone gets to choose how to use them, and in a dynamic language no one will hold your hand if you mix two callback conventions unexpectedly. Here's what JavaScript looked like without promises:

var updateStatement = /* some database statement */;

function apiDoSomethingWithThis(withThis) {
  var url = 'https://somecoolbackend.com/api/justdoit';
  httpLib.request(url, function (result) {
    try {
      database.update(updateStatement, parse(result), function (err) {
        logger.error('Help! ' + err);
        api.rollbackSomething();
      });
    } catch (e) {
      logger.exception(
        e.toString());
      api.rollbackSomething();
    }
  }, function (error) {
    logger.error('Error ' + error + ' from ' + url);
  });
}

This is deceptively simple. Why deceptively? Because it is actually a minefield for future coders, or yourself. Let's go through it step by step. What we see first is updateStatement. Presumably, this variable contains a statement or command in a database-specific language; it could say something like 'take this value and update the database in the right place'. But var does not prevent rebinding updateStatement to something else later, so if by chance someone writes

function buggedFunction() {
  // Rebinds the global updateStatement
  updateStatement = 'some function local update statement';
}

rather than

function buggedFunction() {
  // Shadows the global updateStatement
  var updateStatement = 'some function local update statement';
}

what you get is a bug. But this has nothing to do with promises, so let's move on. Take a closer look at this code: you can see two types of callbacks, one nested in the other, with different conventions regarding how to handle errors and how to pass the results of a successful call. Inconsistency is a big factor when it comes to dumb mistakes. Not only that: the way they are nested prevents the exception handler from being the sole point of failure in the block, so api.rollbackSomething() needs to be called twice with the exact same arguments. This is particularly dangerous. What if someone changes the code in the future to add a new failing branch? Will he or she remember to do the rollback? Will he or she even see it? Lastly, the logger is also called multiple times just to show the current error, and the argument passed to it is constructed using string concatenation, another source of dumb mistakes. This function leaves the door open to many bugs. Let's see how ECMAScript 2015 can help us prevent them:

// This won't get rebound in the future. Plus, strings are constant, so this
// is assured to never change.
const updateStatement = 'some database statement';

function apiDoSomethingWithThis() {
  const url = 'https://somecoolbackend.com/api/justdoit';
  httpLib.request(url).then(result => {
    // database.update returns a promise as well
    return database.update(updateStatement, parse(result));
  }).catch(error => {
    logger.error(`Error: ${error}, from URL: ${url}`);
    // Our API is such that rollbacks are considered
    // no-ops in case the original request did not succeed, so it is OK to
    // call it here.
    api.rollbackSomething();
  });
}

This is beautiful. All of the conflict points outlined before are neutralized by ECMAScript 2015. It is much harder to make mistakes when presented with code like this, and it is much simpler to read. Win-win. If you are asking yourself why we return the result from database.update, it is because promises can be chained: a promise can take the result of the next promise in the chain in case it succeeds, or it can perform the right action in case of failure. Let's see how that works in the example above. The first promise is the one created by httpLib.request. This is our outermost promise, and it will be the one that tells us whether everything went well or something failed. To do something in either of those cases, we can use then or catch. It is not necessary to call any of these functions: you can call one, you can call both (as we do above), or you can disregard the results completely. Now, inside any of these handlers two things can happen: you can do something with the data passed to your function (either the result or the error) and return a value, a promise, or nothing; or you can throw an exception. In case an exception is thrown, both then and catch know how to handle that as an error condition: the next catch in the chain will get the exception. In our case, the outermost catch gets all errors, both those generated by the httpLib.request promise and those generated inside then. It is important to note what happens with exceptions thrown inside the outermost catch: they are stored inside the promise for a future call to catch or then, and if no call is performed, as happens in the example above, they get ignored. Fortunately, api.rollbackSomething() does not throw any exceptions. The functions then and catch always return promises, even when there are no more promises in the chain; that means you can call then or catch again after any call to these functions. This is why it is said promises can be 'chained'. When everything is done, any further calls to then or catch execute the callback passed to them immediately.
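The chaining behavior just described can be seen in a tiny self-contained example (double() is a made-up helper, not part of any library): a plain value returned from then() feeds the next handler, a returned promise is resolved before the chain continues, and a throw skips ahead to the next catch():

```javascript
// Hypothetical helper: returns a promise so it can be chained.
function double(n) {
  return Promise.resolve(n * 2);
}

double(2)
  .then(function (n) { return double(n); }) // returning a promise chains it
  .then(function (n) {
    if (n !== 8) throw new Error('unexpected value'); // would jump to catch
    return n + 1;                                     // plain value: passed on
  })
  .then(function (n) { console.log(n); })             // logs 9
  .catch(function (err) { console.log('failed: ' + err.message); });
```

Commenting out the return in the first then() would break the chain exactly as described above: the second handler would receive undefined instead of the resolved value.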
It is important to note that chaining promises is usually the right thing to do. In the example above, we could have omitted the return statement in front of database.update; the code would have worked the same in case no errors were caused by the database operation, but it would behave differently if an error were to occur: if the database operation were to fail, the catch block would not get called, as the promise would not be chained to the outermost one. So how can you create your own promises? Easy enough:

const p = new Promise((resolve, reject) => {
  try {
    const result = action(data);
    resolve(result);
  } catch (e) {
    reject(e);
  }
});

Promises can be chained inside the promise constructor as well:

const p = new Promise((resolve, reject) => {
  const url = getUrl();
  resolve(httpLib.request(url).then(result => {
    const newUrl = parse(result);
    return httpLib.request(newUrl);
  }));
});

Here the full power of promises can be seen: two HTTP requests are chained together into a single promise, data resulting from the first request is processed and then used to construct the second request, and all errors are handled internally by the promise logic. In short, promises make asynchronous code more readable and reduce the chances of making mistakes. They also end the discussion of how promises should work, as before ECMAScript 2015 there were competing solutions, each with their own API.

ECMAScript 2015 generators, iterators, iterables, and for...of. Another big feature from ECMAScript 2015. If you come from Python you will get JavaScript generators right away, as they are very similar:

function* counter() {
  let i = 0;
  while (true) {
    yield i++;
  }
}

If you are not a Python developer, then your brain will throw SyntaxError a couple of times while parsing the code above, so let's take a look at what's going on. The first thing that looks odd is the asterisk right beside the function keyword. This is the new way of declaring a generator in ECMAScript 2015. After that, there's yield right inside the function. yield is a new keyword that signals the interpreter to temporarily halt the execution of the generator and return the value passed to it; in this case, yield will return whatever value is in i. Repeated calls to the generator
will resume execution from the point of the last yield, preserving all state:

const gen = counter();
console.log(gen.next().value); // 0
console.log(gen.next().value); // 1
console.log(gen.next().value); // 2

If all of this sounds familiar, it may be because there is a very similar concept in computer science called a coroutine. Coroutines have an additional feature compared to plain generators: they can accept new data from the outside after each call to yield. In fact, JavaScript supports this, so JavaScript generators are in fact coroutines:

function* counter() {
  let i = 0;
  while (true) {
    const reset = yield i++;
    if (reset) {
      i = 0;
    }
  }
}

const gen = counter();
console.log(gen.next().value);     // 0
console.log(gen.next().value);     // 1
console.log(gen.next().value);     // 2
console.log(gen.next(true).value); // 0

However, all of this may look superfluous at this point. Why add generators? In which way can they help to keep code tidier and error free? Generators were added to make it easier to bring the concept of iterators into the language, and iterators do come up quite a bit in most projects. So what was going on with iterators before ECMAScript 2015? Well, everybody was doing them their own way:

function arrayIterator(array) {
  var i = 0;
  return {
    next: function () {
      // May throw
      return array[i++];
    },
    ended: function () {
      return i >= array.length;
    }
  };
}

var data = [0, 1, 2];
var iter = arrayIterator(data);
iter.next(); // 0
iter.next(); // 1
iter.next(); // 2

So, in a way, generators attempt to bring a standard way of using iterators. Iterators in JavaScript are nothing more than a protocol, that is, a sanctioned API for creating objects that can be used to iterate over iterables. The protocol is best described by an example:

function arrayIterator(array) {
  var i = 0;
  return {
    next: function () {
      return i < array.length ?
        { value: array[i++], done: false } :
        { done: true };
    }
  };
}

Take a special look at the object returned from the arrayIterator function: it describes the protocol required by JavaScript iterators. An iterator is an object that contains a next function taking no arguments. The next function returns an object containing either one or two members: if the member done is true, then no other member is present; done flags whether iteration has completed. The other member shall be value and represents the current iteration value. So any object that adheres to this protocol can be called a JavaScript iterator. This is good: having an official way of doing this means mixing different libraries won't result in six different types of iterators being present, and having to use adapters between them if necessary.
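The protocol can be exercised by hand. This self-contained snippet restates the iterator sketch above and drives it with next(), showing the { value, done } shape at each step:

```javascript
// Hand-rolled iterator following the { value, done } protocol.
function arrayIterator(array) {
  var i = 0;
  return {
    next: function () {
      return i < array.length ?
        { value: array[i++], done: false } :
        { value: undefined, done: true };
    }
  };
}

var it = arrayIterator(['a', 'b']);
it.next(); // { value: 'a', done: false }
it.next(); // { value: 'b', done: false }
it.next(); // { value: undefined, done: true }
```

This is exactly the object shape that for...of, spread syntax, and destructuring consume under the hood.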
Conventions and protocols are good for maintainability, because there are fewer chances of mixing things that look alike but aren't the same, a thing dangerously easy to do in JavaScript. So, having to write iterators this way, although simple, can be cumbersome. What if JavaScript provided a way to create these objects easily? These are generators: generator functions in fact return iterators. JavaScript generators are helpers to create iterators in a more convenient way. The use of generators and the yield keyword helps make it simpler to understand the way state is managed inside the iterator. For example, the example above could be written as simply:

function* arrayIterator(array) {
  for (let i = 0; i < array.length; i++) {
    yield array[i];
  }
}

Simple, and much easier to read and understand, even for an inexperienced developer. Code clarity is crucial for maintainability. But we are missing one key piece in the generators and iterators puzzle: there are many things that are iterable. Collections are generally iterated over; the way elements are iterated over in a collection changes according to the collection in question, but the concept of iteration applies nonetheless. So ECMAScript 2015 provides two more pieces that complete the iterators and generators puzzle: the iterable protocol and for...of. Iterables are objects that provide a convenient interface to construct iterators from them; in other words, iterables are objects that provide the following key:

const infiniteSequence = {
  value: 0,
  [Symbol.iterator]: function* () {
    while (true) {
      yield this.value++;
    }
  }
};

Symbol.iterator and the Symbol object are new in ECMAScript 2015, so this looks very odd. We will go over Symbol later on in this guide, but for now think of it as a way to create unique identifiers (symbols) that can be used to index other objects. Another odd thing here is the object literal syntax: we are using [Symbol.iterator] inside an object literal to set its key. We've gone over this extension of object literals above; this is no different from the example we presented there:

let obj = {
  // ...
};
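Putting the two protocols together, here is a runnable variant of the infiniteSequence idea above (the counter is kept in a local variable rather than an object property, a simplification): asking the iterable for its iterator via [Symbol.iterator]() hands back an object that follows the next()/{ value, done } protocol:

```javascript
// An iterable: an object whose [Symbol.iterator] key is a generator function.
const infiniteSequence = {
  [Symbol.iterator]: function* () {
    let value = 0;
    while (true) {
      yield value++;
    }
  }
};

// The iterable protocol in action: ask the object for an iterator, then
// drive it with the iterator protocol's next().
const itr = infiniteSequence[Symbol.iterator]();
itr.next(); // { value: 0, done: false }
itr.next(); // { value: 1, done: false }
```

Each call to [Symbol.iterator]() returns a fresh iterator starting at 0, which is what lets a single iterable be traversed multiple times.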
So, iterables are objects that provide a `Symbol.iterator` key whose value is a generator function. Now that we have a new key inside objects that can be iterated over, do we need to explicitly get the generator from them every time we want to iterate over the elements they manage? The answer is no. Seeing that this is quite a common pattern (iterating over elements managed by a container), JavaScript now provides a new version of the `for` control structure:

```javascript
for (let num of infiniteSequence) {
    if (num > 10) {
        break;
    }
    console.log(num);
}
```

Yes: all iterable objects can be easily iterated over with the use of the new `for...of` loop. And the good thing about `for...of` is that existing collections have been adapted for use with it. Arrays and the new collections (Map, Set, WeakMap) can all be used this way:

```javascript
const array = [1, 2, 3];

// We will talk about Map later in this article
const map = new Map([['key1', 1], ['key2', 2], ['key3', 3]]);

for (let elem of array) {
    console.log(elem);
}

for (let [key, value] of map) {
    console.log(`${key}: ${value}`);
}
```

Note the odd syntax in the last `for`: `[key, value]`. This is called *destructuring* and is another new feature of ECMAScript 2015. We will talk about it later.

Consistency and simplicity can do wonders for readability and maintainability, and this is exactly what iterators, iterables, generators and the `for...of` loop bring to the table.

### Functions: default arguments and the rest operator

Functions now support default arguments, simplifying the common pattern of checking whether an argument exists and then setting its value:

```javascript
function request(url, method = 'GET') {
    // ...
}
```

As the number of arguments grows, default arguments simplify the flow of the checks required at the start of the function. And simplicity is good when coding:

```javascript
function request(url, method) {
    // Picture repeating this for every default argument without
    // ECMAScript 2015. Yikes!
    if (typeof method === 'undefined') {
        method = 'GET';
    }
}
```

Default arguments also work with the `undefined` value: when passing `undefined` to a default argument, the argument will take its default value instead:

```javascript
function request(url,
                 method = 'GET',
                 data = {},
                 contentType = 'application/json') {
    // ...
}

request('//my-api.com/endpoint', undefined, { hello: 'world' });
```

This does not preclude proper API design: users might be tempted to pass the third argument as the second one, in particular when using HTTP GET. Although default arguments can help to reduce boilerplate inside functions, care must be taken when picking the right order of arguments and their default values.

The rest operator is a new operator inspired by the one from C:

```javascript
function manyArgs(a, b, ...args) {
    // a === 1, b === 2, args === [true]
}

manyArgs(1, 2, true);
```

JavaScript did allow access to arguments not declared in the argument list of a function through `arguments`. So why use the rest operator? There are two good reasons:

- To remove the need to manually find the first argument that is not named in the argument list. This prevents silly off-by-one mistakes that usually happen when arguments are added to or removed from a function.
- To be able to use the variable containing the non-declared arguments as a true JavaScript array. Since its inception, `arguments` has behaved like an array without actually being one. In contrast, the variable created with the rest operator is a true array, bringing consistency, which is always good.

Since the variable declared through the rest operator is a true array, extensions such as `caller` and `callee`, present in `arguments`, are not available.

### Spread syntax

A way to quickly understand spread syntax is to think of it as the opposite of the rest operator. Spread syntax substitutes an argument list with the elements from an array (or any iterable):

```javascript
function manyArgs(a, b, c, d) {
    // ...
}

let arr = [1, 2, 3, 4];

manyArgs(...arr);
manyArgs.apply(null, arr); // old way, less readable
```

Spread syntax can be used in places other than function calls. This opens the possibility for interesting applications:

```javascript
const firstNumbers = [1, 2, 3];
const manyNumbers = [-2, -1, 0, ...firstNumbers, 6, 7];
const arrayCopy = [...firstNumbers];
```

Spread syntax removes one troublesome limitation from past versions of JavaScript: the `new` operator could not be used with `apply`. `apply` takes a function object as a parameter, and `new` is an operator, so it was not possible to do something like:

```javascript
const nums = [1, 4, 5];

function numberList(a, b, c) {
    this.a = a;
    this.b = b;
    this.c = c;
}

// numberList.apply(new numberList(), nums); // no params passed to numberList
```

We can now do:

```javascript
const numList = new numberList(...nums);
```

Spread syntax simplifies a number of common patterns, and simplicity is always good for readability and maintainability.

### Destructuring in JavaScript

Destructuring is an extension of JavaScript's syntax that allows for certain interesting ways of transforming a single variable into multiple variables bound to its internals. We have already seen one example of this above:

```javascript
for (let [key, value] of map) {
    console.log(`${key}: ${value}`);
}
```

In this case, the variable `map` is bound to a Map. This data structure conforms to the iterable protocol and provides two values per iteration: a key, and the value associated with that key. These two values are returned inside an array of two elements: the key is the first element, and the value is the second. Without destructuring, the above code would look like this:

```javascript
for (let tuple of map) {
    console.log(`${tuple[0]}: ${tuple[1]}`);
}
```

The ability to map the internal structure of objects to variables, using syntax that is identical to the original structure, clarifies code. Let's see other examples:

```javascript
let [a, e] = [1, 2];
console.log(e); // 2
```

That was simple array destructuring. What about objects?

```javascript
const obj = {
    hello: 'world',
    arr: [1, 2, 3],
    subObj: {
        a: null
    }
};

let { hello, subObj: { a } } = obj;
console.log(hello); // world
console.log(a);     // null
```

This is getting interesting! Look at this example:

```javascript
const items = [
    { id: 1, name: 'iPhone 7' },
    { id: 2, name: 'Samsung Galaxy S7' },
    { id: 3, name: 'Google Pixel' }
];

for (let { name } of items) {
    console.log(name);
}
```

Destructuring also works in function arguments:

```javascript
items.forEach(({ name }) => console.log(name));
```

It is possible to pick different names for destructured elements:

```javascript
items.forEach(({ name: phone }) => console.log(phone));
```

Failure to destructure an object correctly will result in variables with `undefined` values.

Destructuring can be combined with default arguments, another new feature of ECMAScript 2015. This simplifies certain common coding patterns:

```javascript
function request(url, { method = 'GET', data = {} } = {}) {
    // ...
}
```

Proper care must be taken with default arguments and destructuring, as ECMAScript 2015 does not allow the capture of any keys not declared in the destructuring expression. If the object passed as the second argument in the example above had a third key, say `contentType`, it would not be possible to access it, except by going through `arguments`, which would be cumbersome and would impair readability.
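As a small illustrative aside (not from the examples above), one of the most popular destructuring idioms is swapping two variables without a temporary:

```javascript
let left = 1;
let right = 2;

// Array destructuring assignment swaps the two bindings in one statement
[left, right] = [right, left];

console.log(left, right); // 2 1
```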
This omission (capturing keys not declared in an object destructuring expression) is expected to be fixed in a future version of ECMAScript. Arrays, in contrast, do possess this ability in ECMAScript 2015:

```javascript
let arr = [1, 2, 3, 4, 5];
let [one, two, ...rest] = arr;
// rest === [3, 4, 5]
```

Arrays allow skipping items as well:

```javascript
let [one, , three] = arr; // number 2 skipped
```

Arguably, destructuring is a new way rather than a better way of doing things. My personal advice is to keep things simple and readable: do not overuse destructuring when a simple reference to an inner variable can be written as `let a = obj.a`. Destructuring is of particular use when picking multiple elements from objects at different nesting levels, where readability can be improved. It is also useful in function arguments and `for` loops, to reduce the number of helper variables needed.

### JavaScript modules

One of the most anticipated features of ECMAScript 2015, modules put an end to endless discussions regarding the proper way of extending JavaScript to do what most languages already do: separate code into different places in a convenient, portable and performant way.

If you are relatively new to programming, it might be hard to see why modularity is such an essential requirement for proper development practice. Think of modules as a way to organize code in self-contained units of work. These units define a clear way to interact with other units. This separation promotes maintainability and readability, and allows more people to develop concurrently without stepping on each other's toes. Keeping things small and simple also helps tremendously in the process of design and implementation.

As JavaScript was conceived as a language for the web, it has always been associated with HTML files. HTML files tell browsers to load scripts, placed in other files or inline. Previously loaded scripts can create global objects that are available to future scripts. Up to ECMAScript 2015, this was the only rudimentary way in which code from different JavaScript files could communicate with each other, and it resulted in a plethora of different ways of handling the problem. Module bundlers were born out of the necessity to bring some sanity to this situation. JavaScript interpreters for other environments, such as Node.js, adapted solutions such as CommonJS. Other specifications, such as the Asynchronous Module Definition (AMD), also appeared. The lack of consensus in the community forced the ECMAScript working group to take a look at the situation. The result is ECMAScript 2015 modules.

To learn more about the differences between CommonJS, AMD and ECMAScript 2015 modules, take a look at JavaScript Module Systems Showdown: CommonJS vs AMD vs ES2015.

```javascript
// helloworld.js
export function hello() {
    console.log('Hello');
}

export function world() {
    console.log('World');
}

export default hello;
```

```javascript
// main.js
import { hello, world } from 'helloworld';

hello();
world();
```

ECMAScript 2015 adds a couple of keywords to the language: `import` and `export`. The `import` keyword lets you bring elements from other modules into the current module. These elements can be renamed during import, or they can be bulk imported. The `export` keyword does the opposite: it marks elements of the current module as available for import. Elements imported from other modules can be re-exported.

```javascript
// hello and world available in this module
import { hello, world } from 'helloworld';

// helloworld is an object that contains hello and world
import * as helloworld from 'helloworld';

// helloFn is hello and worldFn is world in this module
import { hello as helloFn, world as worldFn } from 'helloworld';

// h is the default export from helloworld, namely hello
import h from 'helloworld';

// No elements are imported, but any side effects from evaluating
// the helloworld.js module are run
import 'helloworld';
```

An interesting aspect of ECMAScript 2015 modules is that the semantics of `import` allow for either parallel or sequential loading of modules: interpreters are free to choose whatever is more appropriate. This is in stark contrast with CommonJS (sequential) and AMD modules (asynchronous).

### Why are browsers taking so long to implement modules?

If modules are so important for the reasons described above, then why aren't they available yet? As of November 2016, most major browsers implement most of ECMAScript 2015 natively, but modules are still missing. What is going on?

Although ECMAScript 2015 did define modules and their syntax, the specification makes no mention of how they should be implemented with regards to the web. A conforming implementation need only parse JavaScript files containing `import` and `export` statements; it is not necessary to actually do anything with them. This might look like a big omission, but it is not. As mentioned at the beginning of this section, JavaScript has always been married to HTML on the web. The ECMAScript 2015 specification concerns itself with JavaScript and JavaScript only; it has nothing to do with HTML or how JavaScript files are accessed. Although an `import` statement makes it clear an interpreter should attempt to load a file with a specific name, it says nothing regarding how to get it. On the web, this means performing a request to a server with a specific URL. Furthermore, ECMAScript says nothing about the relationship between HTML and JavaScript. This is expected to be resolved by the JavaScript Loader Standard, which attempts to bring forth a loader spec for browsers and standalone interpreters alike. HTML is also expected to add the necessary syntax to differentiate JavaScript modules from otherwise common scripts. A proposed syntax for this is `<script type="module" src="file">`.

### Static nature of import and export

Both `import` and `export` are static in nature: the effects of using these keywords must be fully computable before execution of the script. This opens up the possibility for static analyzers and module bundlers to do their magic. Module bundlers such as Webpack can construct a dependency tree at packing time that is complete and deterministic. Removing unneeded dependencies and other optimizations are possible and entirely supported by the specification. This is a big difference with regards to both CommonJS and AMD. But static modules do remove some flexibility that is handy in certain scenarios. Unfortunately, the dynamic loader proposal did not make it into ECMAScript 2015; it is expected to be added in future versions. A proposal already exists in the form of the System loader.

### Can we use modules now?

Yes, and you should! Although module loading is
not yet implemented in browsers, compilers and libraries such as Babel, Webpack and SystemJS have implemented ECMAScript 2015 modules. The benefit of adopting modules early is that they are already part of the spec: one way or the other, modules are set in stone and won't see major changes in future versions of JavaScript. Using CommonJS or AMD today implies taking a step back and adopting solutions that will fade out in the future.

### New JavaScript collections

Although JavaScript has the necessary power to implement many data structures, some of them are better implemented through optimizations only available to the interpreter. The ECMAScript 2015 working group decided to tackle this issue and came up with Set, Map, WeakSet and WeakMap.

- `Set` stores unique values, whether primitives or objects. Values can be tested for presence in the set. `Set` uses special comparison semantics, which mostly resemble `===`, to check for equality.
- `Map` extends the concept of `Set` to associate arbitrary values with unique keys. `Map` allows the use of arbitrary unique keys, in contrast with common JavaScript objects, which only allow strings as keys.
- `WeakSet` behaves like a `Set` but does not take ownership of the objects stored in it: objects inside a `WeakSet` become invalid once no references to them are available from outside the set. `WeakSet` only allows objects to be stored in it; primitive values are not allowed.
- `WeakMap` is weak in the keys, like `WeakSet`, and strong in the values it stores.

JavaScript has always been lean in the data structures department. Sets and maps are among the most used data structures, so integrating them into the language itself makes sense.
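To make the four collections concrete, here is a brief sketch of each (the variable names are illustrative):

```javascript
// Set: stores unique values; duplicates are silently ignored
const ids = new Set([1, 2, 2, 3]);
console.log(ids.size);   // 3
console.log(ids.has(2)); // true

// Map: arbitrary keys, including objects
const keyObj = { id: 1 };
const metadata = new Map();
metadata.set(keyObj, 'some value');
metadata.set('name', 'example');
console.log(metadata.get(keyObj)); // some value

// WeakSet/WeakMap: keys must be objects and are held weakly, so
// entries may be garbage-collected once no outside references remain
const seen = new WeakSet();
const item = {};
seen.add(item);
console.log(seen.has(item)); // true

const extra = new WeakMap();
extra.set(item, 42);
console.log(extra.get(item)); // 42
```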