Website Designing for Beginners

A design represents something; this tradition dates back to ancient and classical times. Since the dawn of humanity, the purpose of design has been simple: to communicate, irrespective of language. As we entered the new age, communication changed with us, and in this age of technology, the umbrella of design has expanded rapidly.

A good design communicates well with whoever observes it. Website designing is simply a result of changing technology; the core purpose remains the same. Website design suits creative individuals, whereas website development suits technical individuals. This article aims to answer common questions about web design.

What is graphic design, and how does it relate to web design?

Since the inception of the digital era, we have wanted to make communication between humans and machines more creative and artistic. Graphic design is a product of that changing technology: a field that uses visual elements to represent a solution to a problem. So who is a graphic designer? A graphic designer creates visual elements to communicate effectively and creatively. Examples of graphic design include logo design, package design, mobile app design, website design, and much more.

Graphic design is a vast field with many branches, and website design is one of them.

What is Website Designing?

You may be wondering what website designing is. Before answering that, let's define what a website is: a website is simply a group of web pages under a unique domain name, and a web page is a document on the internet containing content. Anyone who designs these web pages is called a website designer, and the overall process is called website designing, or simply web design.

Web design is the overall process of designing the arrangement of content online. A good website design fulfills its function by effectively conveying its intended message to a particular audience.

We can divide web designing into the following branches:

  • User interface design: Whether creating a custom website design or a generic one, web designers work on the front-end user interface. The user interface is the layout through which users interact with the website. The more user-friendly your website is, the better users interact with it.
  • User experience design: In simple terms, user experience describes how users feel when interacting with your website.

Tools for website design

There are a ton of tools for website designing; which one to use ultimately depends on your use case and the scope of the project. Generally, a single application can get the job done. The most prominent tools are:

  • Adobe XD

Used for UX/UI designing and prototyping.

  • Sketch

It’s a vector graphics editor for macOS.

  • Figma

A web-based user interface design tool.

  • Adobe Photoshop

Designers use it for image editing, graphic design, and digital art creation. Adobe Photoshop is an advanced design tool, so you need some training to master it. Do not worry: there are numerous free tutorials on YouTube.

  • Other tools

Balsamiq, Canva, and InVision are other prominent design tools used when designing a website.

Future and scope of website design

The technological revolution has changed how we work and communicate with each other, and the COVID-19 pandemic showed how technology can assist us with our problems. Since website designing has evolved along with shifting technology, it is safe to say that website design, including custom website design, is future-proof.

We must also understand that we need to stay dynamic in our understanding of digital technology. Many accessible platforms can produce a custom website design, but if you are a talented website designer, your future is bright. Changing technology has also introduced new website design trends.

Technology has made learning easier for an individual. One can quickly master this skill set if one is interested.

What is custom website designing?

Any website design that does not follow a pre-made pattern or template falls under the banner of custom website designing. Many custom website design agencies offer tailor-made solutions to their customers, and such agencies seek talented individuals with a knack for website design. According to research published on ResearchGate, 94% of an audience's first impressions are impacted by visuals and design.

When opting for custom website designing, statistics play a vital role. UX/UI research helps you understand your target audience and provide solutions to their needs through design and experience. Numerous tools can help you throughout your website design and development journey. Custom website design agencies can help you make your website designs more innovative and user-friendly.

Creative Website Studio and website design

Creative Website Studios is a customer-driven, innovation-oriented, and technology-focused company. We work hard to implement the latest technology on the market and focus on client satisfaction when it comes to website design. Our portfolio and testimonials speak to our hard work and dedication. Apart from custom website design, we also provide custom website development. To know more about website development, visit our blog.

How to become a website designer?

Growth in technology has resulted in many opportunities in graphics and website design. There are tons of website design courses to help you land your dream job, and many certifications take you from beginner to pro in a decent time frame. Apart from that, you can visit the following websites for design inspiration:

  • Behance – for showcasing and finding creative work
  • Awwwards – promotes and awards the best website designs
  • Dribbble – a community where designers enhance their skills and find jobs

CSS Nectar, Siteinspire, Designspiration, and Pinterest are other platforms for design inspiration.

Salary of a website designer

As per Payscale, the mean salary of a website designer in the USA is $52,296 per year. A portfolio that shows your creativity and skill in this field will help.

Conclusion

No matter how good your product is, a bad design makes people lose interest and stop interacting, causing a loss of potential customers. The same goes for a poor website design. Throughout a website's design lifecycle, you as a designer, or as a custom website design company, must ensure that you follow the latest visual design trends. You must also make sure you are using technology correctly.

Custom Graphic Designs and Their Role in Web Designing

Your website is one of the most critical aspects of your brand and contributes significantly to your brand's online presence. Besides that, your website is also the place where you nurture leads into conversions. Therefore, you cannot afford failures; you need to do it right.

Consider these statistics:

  • According to research, first impressions of the audience are 94% design-related.
  • The same research also highlights that about 46% of consumers base their judgment of a website's credibility on its visual appeal and aesthetics.
  • Adobe found that 38% of people will stop engaging with a website if the content or layout is unattractive.
  • Judgments of a company's credibility are 75% based on the company's web design.

When designing your website, these statistics must be a top priority, along with your audience. There are endless tools, technologies, and UI/UX graphic design services available today that can help you create a website design that cuts through the noise.

Importance of Graphic Design on your Website


People are visual beings. Think about it: the number of positive responses to visuals on a website surpasses the number of people who actually read the text. With that said, the more visual content you have on your website, the more people will be enticed to visit it again.

High-quality content must always find its way onto the website to attract your business's potential prospects and keep them engaged with the brand. Visual representation assumes a unique position in planning your site. It involves two things: design and functionality.

While the code helps users interact with the right pages, decipher the content, and find the right product, the graphic design supporting that code makes the content noticeable through text styles, colors, and images, as well as the navigation of the website. Consequently, a website needs the correct blend of pictures, content, text styles, and tiles to make an exclusive web graphic design.

Five Reasons making Graphic Designs Important for Websites

  • It sets the first impression

When potential customers visit your website, it is important that it makes a lasting impression on them. They will judge your business in a split second, and if your website has an unappealing or outdated look, your audience is likely to form a negative impression of your brand.

Graphic designs and an appealing website layout are essential aspects of making an impression on the audience. They lay the foundation for how the audience perceives your business and what message it conveys. The impact you make on your prospects can either make or break your business. With that said, a good web design can help you keep your leads on the page.

  • Graphics provide more information

Graphics can enhance the structure, design, or informative content presented on the web page while holding the users' attention. If your potential audience is distracted when viewing your website, it means they weren't engaged enough, and that's not a good sign. Bear in mind that the graphics you use on your website must always complement the culture and services of the organization, the style it presents, and the purpose it works for.

Graphics are an essential part of the website, as they convey more detailed information than the text on the page. Think about it: a simple infographic or an animation that lasts no longer than two minutes can drive much more engagement than a static image or plain text.

Since visitors to the website are likely in a rush to learn about your services, graphics are a subtler way to convey more information about your brand and make an impact on the audience.

  • It aids your business’s SEO strategy

Many web design elements and practices significantly influence how you publish content on the website, ultimately affecting how search engine spiders crawl and index your web pages.

This is one part of building a brand that you cannot miss. One reason is that if you mess up the on-page SEO fundamentals of the website, chances are you will be fighting for your spot against the competition from the very inception of your business.

Not just that, it is important to keep in mind that certain web design elements can directly affect the SEO of the website. Web graphic design can be challenging to understand if you are unfamiliar with how it works, but you need to make sure that your code is SEO-friendly.

The best way to ensure web design practices is to find the “best graphic design services near me” and trust them with everything they are doing.

  • It sets the impression for customer services

The audience tends to judge how you will treat them by the look of your website. Your web design gives them insight into how you view your audience, influencing their next action, i.e., whether to hit the call-to-action button.

To do it right, think of your website as a customer service representative. If your website is bright, modern, and welcoming, your audience is likely to feel more comfortable engaging with you. You'll give the impression that you are open and welcoming to new people.

On the contrary, if you haven't updated your website in a while or it has an unappealing design, it can make your business look cold and aloof. People tend to leave a website that doesn't match their expectations or fails to make a good impression on them.

Think of your website design as your business's digital face. If someone walking into your physical business location is impressed with the ambiance, they should get the same impression when they enter your digital storefront.

  • It creates consistency

You want your brand to perform well in the digital landscape and expect it to generate new leads. You want your brand to make an impression on the audience and for them to become familiar with it so they can convert. If every page of your website has a different design layout, it will look unprofessional and drive people away.

To do it right, you need to maintain consistency throughout the website: fonts, styles, colors, and layout. Every page of your website must say the same thing about your brand as the home page does. This approach works seamlessly to build brand recognition.

On the other hand, if your website is not consistent, people are likely to leave it for a website that looks more professional. With consistency in design, you keep your leads on the page longer, ultimately resulting in conversions. Consistency gives your users the feeling of a well-organized website, reinforcing the trust you want to build.

Conclusion

Graphic design is a vital aspect of building your website's credibility. If you want the best results for your business, it is important to invest in creating a website that drives people to learn more about the company. Make sure to work with professionals to make the right design choices.

Growing UX Maturity: Finding A UX Champion And Demonstrating ROI (Part 1)

UX maturity is the presence and level of sophistication of UX in an organization. Organizational maturity goes beyond the skills of the individuals composing the UX roles on various teams, to the UX processes, philosophies, and tools underpinning the organization’s product development and business practices. As Chapman and Plewes (2014) state,

“Achieving great UX design is not just a function or talent of individuals, it is an organizational characteristic.”

Knowing this means we must strive to understand and grow the maturity of UX practice within the organizations and product teams we work with. Simply being good at our own jobs isn't enough. As UX practitioners, we are advocates and educators of our craft within the organizations we work for or with.

Note: This article is the first in a three-part series covering six tactics UX practitioners and managers can adopt to facilitate the growth of UX maturity at their organization.

Let’s take a quick look at the six tactics we’ll be covering and their relationship to UX maturity:

  1. Finding and utilizing UX Champions
    Beginning stages: the UX champion will plant seeds and open doors for growing UX in an organization.
  2. Demonstrating the ROI/value of UX
    In the beginning stages, ROI justifies more investment; in later stages, it justifies continued investment.
  3. Knowledge sharing/Documenting what UX work has been done
    Less relevant/possible in the earliest stages of maturity when there is little UX being done. Creates a foundation and then serves to maintain institutional knowledge even when individuals leave or change roles.
  4. Mentoring
    Most relevant in the middle and later stages of maturity. Mentoring grows individual skills in both directions, exposes more people to UX, and improves knowledge transfer from more senior UX staff; it should lead to a shared understanding of how UX looks and is implemented in the organization.
  5. Education of UX staff on UX tools and specific areas of UX expertise
    All stages of maturity require continued education of UX staff.
  6. Education of non-UX staff on UX principles and processes
    All stages of maturity benefit from education of non-UX staff.

These tactics don’t build on the prior tactics — you can and should implement multiple tactics simultaneously. However, some tactics (e.g. mentoring) might not be possible in an organization with low UX maturity that lacks the support for a mentoring program.

UX is a skill: it can be practiced, grown, and improved. It can also languish and atrophy if not appropriately exercised. This is true for individuals and organizations. An organization's UX maturity level impacts all aspects of how UX is prioritized and implemented throughout the organization and its products.

If we wish to meaningfully improve our UX practice, it is critical we look for opportunities to help grow the maturity of UX across our organization. We face a larger challenge when it comes to growing UX in a way that has impact across an organization than we do with growing our own UX skills.

In this article, I’ll briefly discuss some of the existing models you can use to provide a framework for thinking about an organization’s UX maturity. I’ll then explore two specific tactics for UX practitioners to make an impact to help grow UX maturity within their organizations when they are in the early stages of UX Maturity.

Defining UX Maturity

We don’t have one agreed upon model of what UX maturity looks like at different stages. Natalie Hanson has a blog post providing a collection and discussion of various UX Maturity models up to the point it was published in 2017.

Chapman and Plewes define five stages of organizational UX Maturity from “Beginning” which is essentially no UX, to “Exceptional” where UX has been fully integrated into the business processes, resources are plentiful, leadership understands the value of UX and how it works, and the organization’s culture is supportive and promotes UX.

Most of us probably work for organizations with some level of UX Maturity, meaning beyond Stage 1 where there are no resources. However, it’s also possible some of us work in organizations at the beginning or awareness stages. If you are in this situation, you might find yourself frustrated with the lack of support and understanding of UX within your organization and product teams. We should push to move our organizations and colleagues further along this UX maturity continuum if we wish for UX to grow as a field, increase opportunities to bring our peers into the fold, and ultimately to provide the best experiences for end users of the products or services our organizations offer.

Frameworks and models are helpful for understanding how researchers and professionals have observed UX maturity growing in organizations. They allow us to understand where we are and where we are headed, if we create a strategy to get there. We need to move beyond theory and into the application of specific tactics if we want to push our organization to grow in UX maturity. I’ll present two tactics for demonstrating the value of UX and documenting progress of UX in an organization that will help grow UX maturity in the section below.

What Can We Do To Grow Our Organization’s UX Maturity: Two Tactics

It can feel frustrating trying to make change in large organizations. Here are some tactics UX practitioners can consider applying to their situation. These two tactics are especially helpful for organizations with less mature UX, and more opportunity to grow:

These tactics are meant to create a broad impact across the organization and plant the seeds of UX in potentially fertile fields. I’ll tie them back to Chapman and Plewes factors composing the stages of UX Maturity as relevant within the discussion of the specific tactic.

Tactic 1: Finding And Utilizing UX Champions

Champions are people who enthusiastically support the growth of an innovation or idea within an organization. Researchers have long found champions are a critical component of overcoming social and political barriers to innovation within organizations. I would argue you cannot move a large organization out of Chapman and Plewes stage 1 without having a set of Champions. Champions do not need to be experts or practitioners of UX. However, we need to identify the correct people, in the right positions of power, who can advocate for UX as a concept, advocate growing UX, and push for UX resources in the form of budget and roles, if we wish to grow UX in organizations with low levels of UX maturity.

Effective champions display the following types of behaviors according to some researchers:

I’d add to these behaviors that champions need to be well educated on the idea or innovation (in this case UX) in order to maximize effectiveness. We are responsible for providing this education through conversation, examples, and providing resources supporting the champion in their learning.

We can tie champions back to Chapman and Plewes factors of Leadership and Culture, as well as potentially the Timing of UX factor:

  • Champions should be able to identify and advocate for the proper time to insert UX into existing process.

Champions usually play this role in an informal capacity. This makes sense when we think about an organization at the fledgling stage of implementing UX — it is unlikely you would immediately go from having little to no UX, to hiring a specific role for championing the cause. Champions therefore are promoting UX in the course of their other everyday activities.

As a UX practitioner, your goal is to find the champions within your organization, educate them on the role and value of UX, provide them with real life examples of how UX is making a difference, and work with them to identify the opportunities to insert UX into other products or processes within an organization.

We need to be purposeful when we look to invest time cultivating a champion. You can answer these questions when looking to identify and work with a champion:

You can pick and choose which of these questions apply most to the situation in which you are trying to find a champion, or you could use them as filters: start with the largest list of potential champions you can think of, then remove names that don't meet the qualifications. The remaining names are the people you can pursue to become UX champions within your organization.

Case Study: Finding And Utilizing A UX Champion At A Large International Logistics Company

You might think it is a fairly daunting task to quickly identify an effective champion within your organization. This case study will show the opposite can be true. Within one month, I was able to identify UX champions in an organization I’d never worked with. Within three months, the champions had created meaningful change, identified more opportunities than we could handle with the resources we had, and set the course for a bright future for UX within the organization.

A major logistics company serves as the example for this case study. The company had familiarity with UX and CX, even espousing that it was transforming itself into a customer-first organization. Unfortunately, these words were not reflected in the UX integration throughout the company.

I would classify the organization as being at Chapman and Plewes' adopting stage for some products; however, it was clear other products or projects were only at the awareness stage (stage 2), in that there were no UX processes. This included the project I was assigned to when I joined as a consultant. Scattered products received some UX attention: one-off efforts run by small UX teams focusing on key issues brought up by major clients. There was some legacy of UX from the past; however, after many years of UX work being done in various pockets of the organization, there was still no true UX process identifiable across the company, UX was not required for products or workstreams, and when budgets contracted, UX titles were some of the first to be eliminated.

The company was undergoing a complete backend technology transformation in order to move its many disparate entities onto the same technology platforms. I was brought in to see how to infuse UX into the process. I knew this would be challenging, as the ways of working had already been defined and the focus was on getting things to production quickly, with developers also doing the design, based on requirements created by large groups of product owners and managers.

There was a huge appetite for the UX work, but much less appetite to incorporate the process into the already break-neck pace of the development underway. We worked to find ways to contribute to the current development efforts through testing and found we were able to get a foothold in some of the key areas the effort was focusing on.

Specifically, we took on a UX research and design project with a product owner whom we had identified during our preliminary stakeholder interviews as someone key to have as a champion. This champion was ideal because they were highly motivated, well connected with people in powerful positions across the company, and, perhaps most importantly, owned a product that was key to the success of the endeavor, putting them in a position to have us immediately start conducting research that would lead to design.

I want to note here that the champion was not an executive level employee. They did not have the power to make people do things just because they told them to. This champion had all of the traits referenced in research on the role of an innovation champion:

  • Pursuing The Idea
    Our champion traveled, spent time in meetings and workshops, reached out to countless others, educated themself, and spent time outside of their typical duties in order to push for UX to grow in the organization.
  • Expressing Enthusiasm And Confidence About The Success Of The Innovation/Idea
    Our champion maintained a positive attitude and was able to readjust without giving up at multiple points during our time there.
  • Persisting Under Adversity
    The general conditions on the ground were adverse to UX, with the focus on production. However, there were other mountains in the way that our champion needed to overcome. One specific example: there was immediate and then constant pushback from colleagues on the product's ability to incorporate research and redesign. It was relentless, but our champion did not let it stop them.
  • Getting The Right People Involved
    Our champion was well connected and knew how to get the right people involved. They had been in the organization for a decade and had a stellar reputation. For example, they knew the right executives and could get them to attend meetings to make a statement on the need for UX, when they were facing the adversity referenced in the bullet above.
  • Building Networks
    Our champion introduced us to key people, set up meetings between people across products and teams, and had the ability to get the right people to network without the need of being present in every meeting themself.
  • Taking Responsibility
    Our champion assigned and delegated tasks as needed, but they also took it upon themselves to review all work, spend time learning UX processes and value, and advocate for UX.

This case study highlights the power and importance of a UX champion in growing UX in an organization. Thanks to the presence of our champion, we used our foothold to gain the ear of key executives, as well as of other champions who advocated the need to "walk the talk" on being customer focused. This allowed UX to define some key processes and contribute to the broader group.

While our work there did not last beyond the end of this key workstream, by the time we left there was an established library of reports, a defined process for UX to integrate with technology builds, and a philosophy shift: not only did the words "customer focused" need to be stated, but customer-focused behavior needed to be reflected in what was being done.

Additionally, the champion had secured a new UX resource as a permanent hire for their product, built a backlog of UX projects to complete, and created a larger network of UX practitioners across the organization than had previously existed.

Tactic 2: Demonstrating The ROI/Value Of UX

As UX practitioners, we often focus on the value our work provides through the lens of a more satisfactory, efficient, or enjoyable experience. We take pride in meeting our users’ needs.

However, we work in settings where decisions are scrutinized based on their impact on the bottom line of profit and loss. We avoid reality if we don't acknowledge the need to justify UX based on the return on investment a business or organization can expect. ROI can be more than a monetary calculation, though, with other metrics and key performance indicators useful for showing how UX impacts an organization or product.

Nielsen Norman Group notes that ROI encourages buy-in, which is key for growing UX in organizations less familiar with the value UX work brings. NNG also lists myths that tend to prevent us from moving forward with calculating UX ROI, including:

  • The ROI of UX is all about money;
  • The ROI of UX has to account for every detail.

You will need to work to overcome these myths, as they might exist within your organization, when you start to measure UX ROI and increase buy-in for UX.

You can use a number of different metrics to show ROI; as NNG notes, it isn't limited to money. Your product and industry might best dictate which metrics or key performance indicators tell the story of the ROI of improving UX. Yes, if you design for an e-commerce site, increasing conversion and sales will be a story you'll want to tell. But this tale might focus on additional metrics such as speed to complete a task, cart abandonment, or ratings on an app store or review platform.

I do believe many executives, across industries, are looking for the financial benefit of the decisions they make. We do need to present a business case for anything we propose that will cost money or resources such as time, training, and tools.

At face value, return on investment is the increase in value or profit (the return) from an investment (in this case, adding UX resources to a product) divided by the cost of that investment (budget, UX software subscriptions, UX training, and so on). There isn't a magic number, but you generally want the final figure to be greater than 1, suggesting a positive return on the investment. Depending on the product, many items can be considered part of both the cost and the return.
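To make the arithmetic concrete, here is a minimal sketch in JavaScript; the figures are invented purely for illustration:

let uxInvestment = 30000; // hypothetical yearly cost: share of salaries, tooling, training
let uxReturn = 75000;     // hypothetical yearly return: added revenue plus saved support costs

let roi = uxReturn / uxInvestment;
console.log(roi); // 2.5, greater than 1, so the investment pays off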

Anders Hoff provides a website ROI calculator. Human Factors International provides six different calculators depending on what you are trying to measure, from increased conversion to increased productivity, to reduced costs on formal training and reduced learning curve and more.

Moving beyond specific monetary returns requires deeper research and/or collecting analytical data. You will use these metrics to tailor your conversation about the need to grow UX to a specific audience. In other words, for some of these metrics you might actually benefit from a currently low or less-than-desirable value, as it bolsters your case for improving the experience to enhance the return.

Many product teams do collect analytics, even if they aren’t invested in UX, as this has become industry standard and easy to do. However, if you don’t know how to use these analytics, or haven’t had upfront conversations about what to collect, you’ll need to connect with the people in charge of collecting and reporting analytics to ensure the data you need will be available.

  • Finding information/navigating a site or application
    How long does it take a user to go through a typical workflow? Do they encounter errors? Do they drop before reaching a critical destination, but after starting down the path?
  • Ratings on app stores or industry rating platforms
    How are users rating the current experience? What qualitative information are they providing to support their ratings? Does any of this tie back to UX, or could any of it be addressed with improved UX?
  • Use/time spent
    Overall visits or time spent in an app or on your site. If you provide information or an experience that needs people to focus and pay attention, this might be a number that is currently low and that you want to go up. However, if you provide a way to apply for goods and services, or to do something like pay a utility bill, you might instead focus on how reducing time spent could be a good return for users.
  • Service/support calls and the frequent topics of calls
    How frequently does your support team receive calls or emails related to usability issues, or issues that could easily be resolved with an improved UX? My experience suggests that confusing login credentials and the inability to self-serve basic account issues online are frequent reasons people contact support. These are UX issues with a direct cost, and most companies know the cost of their support-center calls. How much would you save by reducing these calls with better UX?

These are all examples of ways you can communicate ROI to your stakeholders, as part of a justification to grow UX in your organization. You need to determine what metric might speak clearest to the audience you are hoping to sway.

Case Study: Demonstrating ROI/Value Of UX At A Medical Insurance Provider

A large medical insurance provider had acquired a number of smaller providers over the past decade. Each of these separate companies had different systems for their agents. The company undertook an effort to shift all agents onto the same platform, new to everyone.

The company planned the rollout in phases focusing on geographic regions. Initially, the company had no UX roles or processes, and it did not intend to account for UX in the budget. Independent agents who were part of the first phase immediately stopped running policies through this provider. Exclusive agents flooded the call center with cries for help, needing to be walked through basic everyday tasks such as running quotes and binding policies. The provider pushed pause on subsequent releases while it determined how best to move forward.

I was brought in, along with my colleagues, to form a usability workstream on this project. However, we knew that budget was tight and we would need to show our value. We immediately engaged end users in a series of interviews and usability testing. From there, we made design recommendations, from small tweaks to major overhauls. Some of them were adopted, others were not considered feasible. The project moved on to release the usability fixes to the phase one agents, and into the subsequent phases of release.

The project leadership had to request any future UX budget for the project from an executive committee. Project leadership knew what would convince executives that UX was making an impact and therefore yielding a positive return on investment. We held a workshop with project leaders to determine key metrics and landed on user satisfaction, calls to the call center requesting assistance, number of quotes run, and several other industry-specific metrics.

I need to note the importance of collecting benchmark metrics here. For example, we weren't able to speak to an increase or decrease in the number of quotes run, because this metric wasn't being purposefully tracked during phase one. However, we drew a line in the sand, and from that point forward we had a benchmark that could be compared against future updates and releases.

Using a combination of user surveys, interviews, and data analytics, we were able to make the case that phase-one users had the lowest satisfaction but were trending upward, that the recipients of the UX-improved phase two showed higher initial satisfaction, and that UX was making an impact on reducing calls to the call center; as noted, we also started purposefully documenting specific analytics. Project leadership presented these findings to the executive committee as part of their ask for continued funding, which was approved.

Fast-forwarding a few years, UX remained on board the project, with a budget for testing and revising designs prior to release, and was touted as a must-have part of any future projects and digital products.

Conclusion

We all stand to benefit from increasing awareness and growing UX maturity in our organizations or on the product teams we work with. As practitioners, we are responsible for advocating UX to others.

I've presented two tactics that are especially potent in less mature UX organizations; however, they can be useful in any organization, especially larger ones where UX might be robust on some products or projects and almost unknown on others. The tactics highlight the need to choose the right people to be persuasive in your organization and to use data to support our arguments for UX to play an expanded role.

The next article in this series will explore internal processes we can take to document and share UX work that has occurred, and mentorship needed to take UX maturity to higher levels. The final article will discuss education of both staff with UX roles and staff who do not have UX roles. Stay tuned!

Author Note: I want to thank my colleague Dana Daniels for assistance with background research on UX maturity models.

CSS Auditing Tools

How large is your CSS? How repetitive is it? What about your CSS specificity score? Can you safely remove some declarations and vendor prefixes, and if so, how do you spot them quickly? Over the last few weeks, we’ve been working on refactoring and cleaning up our CSS, and as a result, we stumbled upon a couple of useful tools that helped us identify duplicates. So let’s review some of them.

CSS Stats

CSS Stats runs a thorough audit of the CSS files requested on a page. Like many similar tools, it provides a dashboard-like view of rules, selectors, declarations, and properties, along with pseudo-classes and pseudo-elements. It also breaks down all styles into groups, from layout and structure to spacing, typography, font stacks, and colors.

One of the useful features that CSS Stats provides is the CSS specificity score, showing how unnecessarily specific some of the selectors are. Lower scores and flatter curves are better for maintainability.

It also includes an overview of colors used, printed by declaration order, and a score for Total vs. Unique declarations, along with the comparison charts that can help you identify which properties might be the best candidates for creating abstractions. That’s a great start to understand where the main problems in your CSS lie, and what to focus on.

Yellow Lab Tools

Yellow Lab Tools is a free tool for auditing web performance, but it also includes some very helpful features for measuring the complexity of your CSS, and it provides actionable insights into how to resolve these issues.

The tool highlights duplicated selectors and properties, old IE fixes, old vendor prefixes, and redundant selectors, along with complex selectors and syntax errors. Obviously, you can dive deep into each of the sections and study which selectors or rules specifically are overwritten or repeated. That's a great way to discover some of the low-hanging fruit and resolve it quickly.

We can go a bit deeper, though. You can head to your Browserslist configuration to double-check that you aren't serving too many vendor prefixes, and test your configuration on browsersl.ist or via the terminal.
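As a quick sketch, assuming your project uses a standard Browserslist setup, the check could look like this (the queries below are only example values):

# .browserslistrc (example queries)
last 2 versions
> 0.5%
not dead

# Print the browsers your configuration resolves to
npx browserslist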

Project Wallace

Unlike other tools, Project Wallace, created by Bart Veneman, additionally keeps the history of your CSS over time. You can use webhooks to automatically analyze CSS on every push in your CI. The tool tracks the state of your CSS over time via specific CSS-related metrics, such as average selectors per rule, maximum selectors per rule, and declarations per rule, along with a general overview of CSS complexity.

Parker

Katie Fenn's Parker is a command-line stylesheet analysis tool that runs metrics on your stylesheets and reports on their complexity. It runs on Node.js and, unlike CSS Stats, can measure your local files, e.g. as part of your build process.
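A minimal run might look like the following, assuming Parker is installed globally from npm and pointed at a stylesheet path (check its README for the exact options):

npm install -g parker
parker ./css/main.css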

DevTools CSS Auditing

Of course, we can also use DevTools’ CSS overview panel. (You can enable it in the “Experimental Settings”). Once you capture a page, it provides an overview of media queries, colors and font declarations, but also highlights unused declarations which you can safely remove.

Also, CSS coverage returns an overview of unused CSS on a page. You could even go a bit further and bulk find unused CSS/JS with Puppeteer.

With “Code coverage” in place, after going through a couple of scenarios that include a lot of tapping, tabbing, and window resizing, we can export the coverage data that DevTools collects as JSON (via the export/download icon). On top of that, you could use Puppeteer, which also provides an API to collect coverage, as sketched below.
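Here is a small sketch of that approach using Puppeteer's coverage API; the URL is a placeholder for your own page:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Start collecting CSS coverage before loading the page
  await page.coverage.startCSSCoverage();
  await page.goto('https://example.com');
  const coverage = await page.coverage.stopCSSCoverage();

  // Each entry contains the stylesheet text and the used byte ranges
  let usedBytes = 0;
  let totalBytes = 0;
  for (const entry of coverage) {
    totalBytes += entry.text.length;
    for (const range of entry.ranges) {
      usedBytes += range.end - range.start;
    }
  }

  console.log('Used CSS: ' + ((usedBytes / totalBytes) * 100).toFixed(1) + '%');
  await browser.close();
})();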

We’ve highlighted some of the details, and a few further DevTools tips in Chrome, Firefox, and Edge in Useful DevTools Tips And Shortcuts here on Smashing Magazine.

What Tools Are You Using?

Ideally, a CSS auditing tool would provide some insights about how heavily CSS impacts rendering performance and which operations lead to expensive layout recalculations. It could also highlight which properties don't affect rendering at all (like Firefox DevTools does), and perhaps even suggest how to write slightly more efficient CSS selectors.

These are just a few tools that we've discovered — we'd love to hear about the tools that work well for you to identify bottlenecks and fix CSS issues faster. Please share your story in the comments!

You can also subscribe to our friendly email newsletter to not miss next posts like this one. And, of course, happy CSS auditing and debugging!

The Guide To Ethical Scraping Of Dynamic Websites With Node.js And Puppeteer

Let’s start with a little section on what web scraping actually means. All of us use web scraping in our everyday lives. It merely describes the process of extracting information from a website. Hence, if you copy and paste a recipe of your favorite noodle dish from the internet to your personal notebook, you are performing web scraping.

When using this term in the software industry, we usually refer to the automation of this manual task by using a piece of software. Sticking to our previous “noodle dish” example, this process usually involves two steps:

  • Fetching the page
    We first have to download the page as a whole. This step is like opening the page in your web browser when scraping manually.
  • Parsing the data
    Now, we have to extract the recipe in the HTML of the website and convert it to a machine-readable format like JSON or XML.

In the past, I have worked for many companies as a data consultant. I was amazed to see how many data extraction, aggregation, and enrichment tasks are still done manually, although they could easily be automated with just a few lines of code. That is exactly what web scraping is all about for me: extracting and normalizing valuable pieces of information from a website to fuel another value-driving business process.

During this time, I saw companies use web scraping for all sorts of use cases. Investment firms were primarily focused on gathering alternative data, like product reviews, price information, or social media posts to underpin their financial investments.

Here’s one example. A client approached me to scrape product review data for an extensive list of products from several e-commerce websites, including the rating, location of the reviewer, and the review text for each submitted review. The result data enabled the client to identify trends about the product’s popularity in different markets. This is an excellent example of how a seemingly “useless” single piece of information can become valuable when compared to a larger quantity.

Other companies accelerate their sales process by using web scraping for lead generation. This process usually involves extracting contact information like the phone number, email address, and contact name for a given list of websites. Automating this task gives sales teams more time for approaching the prospects. Hence, the efficiency of the sales process increases.

Stick To The Rules

In general, web scraping publicly available data is legal, as confirmed by the jurisdiction in the LinkedIn vs. HiQ case. Nevertheless, I follow a set of ethical rules whenever I start a new web scraping project. They include:

  • Checking the robots.txt file.
    It usually contains clear information about which parts of the site the page owner is fine with being accessed by robots and scrapers, and highlights the sections that should not be accessed. (A minimal sketch for pulling it up programmatically follows after this list.)
  • Reading the terms and conditions.
    Compared to the robots.txt, this piece of information is available less often, but it usually states how the site treats data scrapers.
  • Scraping with moderate speed.
    Scraping creates server load on the infrastructure of the target site. Depending on what you scrape and at which level of concurrency your scraper is operating, the traffic can cause problems for the target site’s server infrastructure. Of course, the server capacity plays a big role in this equation. Hence, the speed of my scraper is always a balance between the amount of data that I aim to scrape and the popularity of the target site. Finding this balance can be achieved by answering a single question: “Is the planned speed going to significantly change the site’s organic traffic?”. In cases where I am unsure about the amount of natural traffic of a site, I use tools like ahrefs to get a rough idea.
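For the robots.txt check mentioned above, a minimal Node.js sketch could look like this; quotes.toscrape.com stands in for your target site:

const https = require('https');

https.get('https://quotes.toscrape.com/robots.txt', (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  // Print the rules so we can check which paths are off-limits
  res.on('end', () => console.log(body));
});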

Selecting The Right Technology

In fact, scraping with a headless browser is one of the least performant technologies you can use, as it heavily impacts your infrastructure: one core of your machine's processor can handle approximately one Chrome instance.

Let’s do a quick example calculation to see what this means for a real-world web scraping project.

Scenario

  • You want to scrape 20,000 URLs.
  • The average response time from the target site is 6 seconds.
  • Your server has 2 CPU cores.

With one browser instance per CPU core, the project will take roughly 16.7 hours to complete: 20,000 requests × 6 seconds ÷ 2 cores = 60,000 seconds.

Hence, I always try to avoid using a browser when conducting a scraping feasibility test for a dynamic website.

Here is a small checklist that I always go through:

  • Can I force the required page state through GET parameters in the URL? If yes, we can simply run an HTTP request with the appended parameters.
  • Is the dynamic information part of the page source and available through a JavaScript object somewhere in the DOM? If yes, we can again use a normal HTTP request and parse the data from the stringified object.
  • Is the data fetched through an XHR request? If yes, we may be able to call the endpoint directly with an HTTP client. A lot of times, the response is even formatted in JSON, which makes our life much easier.

If all questions are answered with a definite “No”, we officially run out of feasible options for using an HTTP-client. Of course, there might be more site-specific tweaks that we could try, but usually, the required time to figure them out is too high, compared to the slower performance of a headless browser. The beauty of scraping with a browser is that you can scrape anything that is subject to the following basic rule:

If you can access it with a browser, you can scrape it.

Let’s take the following site as an example for our scraper: https://quotes.toscrape.com/search.aspx. It features quotes from a list of given authors for a list of topics. All data is fetched via XHR.

Whoever took a close look at the site’s functioning and went through the checklist above probably realized that the quotes could actually be scraped using an HTTP client, as they can be retrieved by making a POST-request on the quotes endpoint directly. But since this tutorial is supposed to cover how to scrape a website using Puppeteer, we will pretend this was impossible.

Installing Prerequisites

Since we are going to build everything using Node.js, let’s first create and open a new folder, and create a new Node project inside, running the following command:

mkdir js-webscraper
cd js-webscraper
npm init

Please make sure you have npm installed already. The installer will ask a few questions about meta-information for this project, all of which we can skip by hitting Enter.

Installing Puppeteer

We have been talking about scraping with a browser before. Puppeteer is a Node.js API that allows us to talk to a headless Chrome instance programmatically.

Let’s install it using npm:

npm install puppeteer

Building Our Scraper

Now, let’s start to build our scraper by creating a new file, called scraper.js.

First, we import the previously installed library, Puppeteer:

const puppeteer = require('puppeteer');

As a next step, we tell Puppeteer to open up a new browser instance inside an asynchronous and self-executing function:

(async function scrape() {
  const browser = await puppeteer.launch({ headless: false });
  // scraping logic comes here…
})();

Note: By default, headless mode is switched on, as this increases performance. However, when building a new scraper, I like to turn headless mode off, as shown above. This allows us to follow the process the browser is going through and see all rendered content. It will also help us debug our script later on.

Inside our opened browser instance, we now open a new page and direct towards our target URL:

const page = await browser.newPage();
await page.goto('https://quotes.toscrape.com/search.aspx');

Inside the asynchronous function, we use the await statement to wait for each command to finish executing before proceeding with the next line of code.

Now that we have successfully opened a browser window and navigated to the page, we have to create the website’s state, so the desired pieces of information become visible for scraping.

Hence, we will first select ‘Albert Einstein’ and wait for the generated list of topics. We then click on submit and extract the retrieved quotes from the container that is holding the results.

As we will now convert this into JavaScript logic, let’s first make a list of all element selectors that we have talked about in the previous paragraph:

Author select field: #author
Tag select field: #tag
Submit button: input[type="submit"]
Quote container: .quote

Before we start interacting with the page, we will ensure that all elements that we will access are visible, by adding the following lines to our script:

await page.waitForSelector('#author');
await page.waitForSelector('#tag');

Next, we will select values for our two select fields:

await page.select('select#author', 'Albert Einstein');
await page.select('select#tag', 'learning');

We are now ready to conduct our search by hitting the “Search” button on the page and wait for the quotes to appear:

await page.click('.btn');
await page.waitForSelector('.quote');

Since we are now going to access the HTML DOM structure of the page, we call the provided page.evaluate() function, selecting the containers that hold the quotes. For each one, we build an object and define null as the fallback value for each object parameter:

let quotes = await page.evaluate(() => {
  let quotesElement = document.body.querySelectorAll('.quote');
  let quotes = Object.values(quotesElement).map(x => {
    return {
      author: x.querySelector('.author').textContent ?? null,
      quote: x.querySelector('.content').textContent ?? null,
      tag: x.querySelector('.tag').textContent ?? null,
    };
  });
  return quotes;
});

We can make all results visible in our console by logging them:

console.log(quotes);

Finally, let's close our browser (we will add error handling with a catch statement later on):

await browser.close();

The complete scraper looks like the following:

const puppeteer = require('puppeteer');

(async function scrape() {
    const browser = await puppeteer.launch({ headless: false });

    const page = await browser.newPage();
    await page.goto('https://quotes.toscrape.com/search.aspx');

    await page.waitForSelector('#author');
    await page.select('#author', 'Albert Einstein');

    await page.waitForSelector('#tag');
    await page.select('#tag', 'learning');

    await page.click('.btn');
    await page.waitForSelector('.quote');

    // extracting information from code
    let quotes = await page.evaluate(() => {

        let quotesElement = document.body.querySelectorAll('.quote');
        let quotes = Object.values(quotesElement).map(x => {
            return {
                author: x.querySelector('.author')?.textContent ?? null,
                quote: x.querySelector('.content')?.textContent ?? null,
                tag: x.querySelector('.tag')?.textContent ?? null,
            };
        });

        return quotes;

    });

    // logging results
    console.log(quotes);
    await browser.close();

})();

Let’s try to run our scraper with:

node scraper.js

And there we go! The scraper returns our quote object just as expected.

Advanced Optimizations

Our basic scraper is now working. Let’s add some improvements to prepare it for some more serious scraping tasks.

Setting A User-Agent

By default, Puppeteer uses a user-agent that contains the string HeadlessChrome. Quite a few websites look out for this sort of signature and block incoming requests that carry it. To prevent that from becoming a potential reason for the scraper to fail, I always set a custom user-agent by adding the following line to our code:

await page.setUserAgent('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4298.0 Safari/537.36');

This could be improved even further by choosing a random user-agent with each request from an array of the top 5 most common user-agents. A list of the most common user-agents can be found in a piece on Most Common User-Agents.
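A minimal sketch of that idea, assuming you maintain the array yourself (the user-agent strings below are illustrative examples, not a curated list):

// A small pool of user-agents; the entries are illustrative examples only.
const userAgents = [
  'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4298.0 Safari/537.36',
  'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.150 Safari/537.36',
  'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Safari/537.36'
];

// Pick a random entry for this run, just like the proxy rotation further below.
await page.setUserAgent(userAgents[Math.floor(Math.random() * userAgents.length)]);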

Implementing A Proxy

Puppeteer makes connecting to a proxy very easy, as the proxy address can be passed to Puppeteer on launch, like this:

const browser = await puppeteer.launch({
  headless: false,
  args: [ '--proxy-server=<PROXY-ADDRESS>' ]
});

sslproxies provides a large list of free proxies that you can use. Alternatively, rotating proxy services can be used. As proxies are usually shared between many customers (or free users in this case), the connection becomes much more unreliable than it already is under normal circumstances. This is the perfect moment to talk about error handling and retry-management.

Error And Retry-Management

A lot of factors can cause your scraper to fail. Hence, it is important to handle errors and decide what should happen in case of a failure. Since we have connected our scraper to a proxy and expect the connection to be unstable (especially because we are using free proxies), we want to retry four times before giving up.

Also, there is no point in retrying a request with the same IP address again if it has previously failed. Hence, we are going to build a small proxy rotating system.

First of all, we create two new variables:

let retry = 0;
let maxRetries = 5;

Each time we run our function scrape(), we increase our retry variable by 1. We then wrap our complete scraping logic in a try…catch statement so we can handle errors. The retry-management happens inside our catch function:

The previous browser instance will be closed, and if our retry variable is smaller than our maxRetries variable, the scrape function is called recursively.

Our scraper will now look like this:

const browser = await puppeteer.launch({
  headless: false,
  args: ['--proxy-server=' + proxy] // the proxy variable is defined in the next step
});

try {
  const page = await browser.newPage();
  // … our scraping logic
} catch (e) {
  console.log(e);
  await browser.close();
  if (retry < maxRetries) {
    scrape();
  }
}

Now, let us add the previously mentioned proxy rotator.

Let’s first create an array containing a list of proxies:

let proxyList = [
  '202.131.234.142:39330',
  '45.235.216.112:8080',
  '129.146.249.135:80',
  '148.251.20.79'
];

Now, pick a random value from the array:

const proxy = proxyList[Math.floor(Math.random() * proxyList.length)];

We can now run the dynamically generated proxy together with our Puppeteer instance:

const browser = await puppeteer.launch({
  headless: false,
  args: ['--proxy-server=' + proxy]
});

Of course, this proxy rotator could be further optimized to flag dead proxies, and so on, but this would definitely go beyond the scope of this tutorial.
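For illustration only, such flagging could start as small as the hypothetical helper below, called from the catch block with the proxy that just failed; it is a sketch, not part of the tutorial’s final code:

// Hypothetical helper: return a proxy pool without the entry that just
// failed, so the next retry cannot pick it again.
function flagDeadProxy(proxyList, deadProxy) {
  return proxyList.filter(p => p !== deadProxy);
}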

This is the code of our scraper (including all improvements):

const puppeteer = require('puppeteer');

// starting Puppeteer

let retry = 0;
let maxRetries = 5;

(async function scrape() {
    retry++;

    let proxyList = [
        '202.131.234.142:39330',
        '45.235.216.112:8080',
        '129.146.249.135:80',
        '148.251.20.79'
    ];

    const proxy = proxyList[Math.floor(Math.random() * proxyList.length)];

    console.log('proxy: ' + proxy);

    const browser = await puppeteer.launch({
        headless: false,
        args: ['--proxy-server=' + proxy]
    });

    try {
        const page = await browser.newPage();
        await page.setUserAgent('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4298.0 Safari/537.36');

        await page.goto('https://quotes.toscrape.com/search.aspx');

        await page.waitForSelector('select#author');
        await page.select('select#author', 'Albert Einstein');

        await page.waitForSelector('select#tag');
        await page.select('select#tag', 'learning');

        await page.click('.btn');
        await page.waitForSelector('.quote');

        // extracting information from code
        let quotes = await page.evaluate(() => {

            let quotesElement = document.body.querySelectorAll('.quote');
            let quotes = Object.values(quotesElement).map(x => {
                return {
                    author: x.querySelector('.author')?.textContent ?? null,
                    quote: x.querySelector('.content')?.textContent ?? null,
                    tag: x.querySelector('.tag')?.textContent ?? null,
                };
            });

            return quotes;

        });

        console.log(quotes);

        await browser.close();
    } catch (e) {
        console.log(e);

        await browser.close();

        if (retry < maxRetries) {
            scrape();
        }
    }
})();

Voilà! Running our scraper inside our terminal will return the quotes.

Playwright As An Alternative To Puppeteer

At the beginning of 2020, Microsoft released an alternative called Playwright, having headhunted a lot of engineers from the Puppeteer team. Besides being the new kid on the block, Playwright’s biggest differentiating point is its cross-browser support: it drives Chromium, Firefox, and WebKit (Safari).
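To give a feel for the cross-browser API, here is a minimal sketch (assuming the playwright package is installed): the same code can drive Chromium, Firefox, or WebKit simply by swapping the browser type.

const { chromium, firefox, webkit } = require('playwright');

(async () => {
  // Swap firefox for chromium or webkit to target another engine.
  const browser = await firefox.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto('https://quotes.toscrape.com/search.aspx');
  await browser.close();
})();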

Performance tests (like this one conducted by Checkly) show that Puppeteer generally provides about 30% better performance, compared to Playwright, which matches my own experience — at least at the time of writing.

New Live Workshops On Front-End & UX
There is something magical about people from all over the world coming together, live. Camera on, mic nearby, in a comfy chair, with fingertips eagerly hitting your beloved keyboard. We’ve been so humbled to welcome over 2000 wonderful people like you in our workshops already — from Montevideo to Delhi; from Perth to Cape Town; from Austin to remote corners of Lapland.


Meet Smashing Online Workshops: live, interactive sessions on front-end & UX.

Every attendee has their own story and experiences to share, all from the comfort of their home, and the convenience of their working space. And so we’ve just announced new dates and speakers for upcoming months. And we thought, you know, maybe you’d like to join in as well.

Just in case you are wondering: here’s what the workshops are like.

Upcoming Workshops in March–July

No pre-recorded sessions, no big-picture talks. Our online workshops take place live and span multiple days across weeks. They are split into 2.5h-sessions, plus you’ll get all workshop video recordings, slides, and a friendly Q&A in every session. (Ah, you can save up to 25% with a Smashing Membership, just sayin’!)

Workshops in March–April


Meet our friendly front-end & UX workshops. Boost your skills online and learn from experts — live.

Workshops in May–July

What Are Online Workshops Like?

Do you experience Zoom fatigue as well? After all, who really wants to spend more time in front of their screen? That’s exactly why we’ve designed the online workshop experience from scratch, accounting for the time needed to take in all the content, understand it and have enough time to ask just the right questions.


In our workshops, everybody is just a slightly blurry rectangle on the screen; everybody is equal, and invited to participate.

Our online workshops take place live and span multiple days across weeks. They are split into 2.5h-sessions, and in every session there is always enough time to bring up your questions or just get a cup of tea. We don’t rush through the content, but instead try to create a welcoming, friendly and inclusive environment for everyone to have time to think, discuss and get feedback.

There are plenty of things to expect from a Smashing workshop, but the most important one is the focus on practical examples and techniques. The workshops aren’t talks; they are interactive, with live conversations with attendees, sometimes with challenges, homework, and teamwork.

Of course, you get all workshop materials and video recordings as well, so if you miss a session you can re-watch it the same day.

TL;DR

  • Workshops span multiple days, split in 2.5h-sessions.
  • Enough time for live Q&A every day.
  • Dozens of practical examples and techniques.
  • You’ll get all workshop materials & recordings.
  • All workshops are focused on front-end & UX.
  • Get a workshop bundle and save $250 off the price.

Thank You!

We hope that the insights from the workshops will help you improve your skills and the quality of your work. A sincere thank you for your kind, ongoing support and generosity — for being smashing, now and ever. We’d be honored to welcome you.

Modeling A GraphQL API For Your Blog Using Webiny Serverless CMS
In time past, developers reduced the challenges associated with managing content-dependent platforms through the use of Content Management Systems (CMS) which allowed web content to be created and displayed using existing design templates provided by the CMS service.

But with the arrival of Single Page Applications (SPAs), this approach to managing content has become unfavorable, as developers are locked in to the design layouts the CMS provides. This is where Headless CMS services have been largely embraced, as developers have sought more freedom to serve content across various clients such as mobile, web, desktop, and even wearable devices.

A headless CMS stores data in a backend database; however, unlike a traditional CMS service, where content is displayed through a defined template, content is delivered via an API, which gives developers the flexibility to consume content across various clients or frontend frameworks.

One example of such a headless CMS is Webiny. It is a serverless headless CMS which provides a personalized admin application to create content, and a robust GraphQL API to consume whatever content was created through the admin application. Further down this article, we will explore Webiny, use the admin app to model content through the Headless CMS app, and then consume that content via the GraphQL API in a Gatsby blog application.

If this is your first time hearing of Webiny, it’s an open-source framework for building serverless applications which provides users with tools and ready-made applications. It has a growing developer community on Slack, ultimately trying to make the development of serverless applications easy and straightforward.

To make this article easy to follow, it has been broken down into two major segments. You can either skip to the part that interests you most, or follow them in the order in which they appear below.

Note: To follow along, you’ll need an AWS account (if not, please create one) and Yarn or npm installed on your local machine. A good understanding of React.js is beneficial, as the demo application is built using Gatsby.

Creating And Deploying A Webiny Project

To get started, we’re going to create a new Webiny project, deploy it, and use the Headless CMS through the generated admin app to begin modeling content within the GraphQL API.

Running the command below from a terminal will generate a new Webiny project based on your answers to the installation prompts:

npx create-webiny-project@beta webiny-blog-backend --tag beta

The command above runs all the steps needed for bootstrapping a Webiny project. A Webiny project consists of three smaller applications: a GraphQL API, an admin app, and a website, all of which are contained in the generated root Webiny project folder, similar to the one in the image below.

Next, we need to deploy the three components within the Webiny project to AWS so we can access the GraphQL API. The Cloud Infrastructure section of the Webiny documentation gives a detailed explanation of the entire infrastructure deployed to AWS.

Run the command below from your terminal to begin this deployment, which will take a few minutes:

yarn webiny deploy

After a successful deployment of all three apps, the URLs for the admin app, the GraphQL API endpoint, and the website will be printed out in the terminal. You can save them in an editor for later use.

Note: The command above deploys the three generated applications collectively. Please visit this part of the Webiny documentation for instructions on how to deploy the applications individually.

Next, we will be setting up the Headless CMS using the admin application generated for managing your Webiny project.

Webiny Admin App

As part of the first-time installation process, when you access your admin app you will be prompted to create a default user with your details and a password to secure your admin app, after which you proceed through the installation prompts for the Headless CMS, Page Builder, and Form Builder.

From the Admin welcome page shown above, navigate to the Content Models page by clicking on the New Content Model button within the Headless CMS card. Being a new project, the Content Models list will be empty; next, we create our first Content Model.

For our use-case, each content model represents a blog post. This means each time we want to create a blog post, we create a content model, and its data is saved into the database and added to the GraphQL API.

Clicking the lemon floating action button displays the modal with the fields for creating a new Content Model, as shown in the image below.

After creating the content model from the image above, we can open the newly saved content model to begin adding fields containing data about the blog post into the content model.

The Webiny content model page has an easy-to-use drag ‘n’ drop editor which supports dragging fields from the left side and dropping them into the editor on the right side of the page. These fields fall into eight categories, each used to hold a specific type of value.

Before we begin adding the fields for the content model, below is a layout of the items we want to be contained in the blog post.

Note: We do not have to insert the elements in the exact order above; however, adding fields is much easier when we have a mental picture of the content model structure.

Add the following items with their appropriate fields into the content editor to create the model structure above.

1. Article Title Item

Starting with the first item, the Article Title, we drag ‘n’ drop the TEXT field into the editor. The TEXT field is appropriate for a title, as it is meant for short texts or single-line values.

Add the Label, Helper Text and Placeholder Text input values into the Field settings modal as shown below.

2. Date Item

Next, for the Date item, we drag ‘n’ drop the DATE field into the editor. DATE fields have an extra date format setting with options of date only, time only, date time with timezone, or date time without timezone. For our use-case, we will select the date time alongside the timezone option, as we want readers to see when the post was created in their current timezone.

3. Article Summary

For the Article Summary item, we drag the LONG TEXT field into the editor and fill in the Label, Helper Text, and Placeholder Text inputs in the field settings. The LONG TEXT field is used to store multi-line text values, which makes it ideal here, as the article summary will span several lines summarizing the blog post.

We will also use the LONG TEXT field to create the First Paragraph and Concluding Paragraph items, since they too contain lengthy text values.

4. Sample Image

The FILES field is used for adding files and object data to the content model. For our use-case, we will add images to the content model using the FILES field, so drag ‘n’ drop the FILES field into the editor.

After adding all the fields above, click the Preview tab to show the input elements added to the content model, then fill in the values of these input fields.

In the Preview tab above, we can see a preview of all the model fields dropped into the drag ‘n’ drop editor for creating a blog post using the content model. Add the respective values into each of the input fields, then click on the Save button at the bottom.

After saving, we can view these input values by querying the GraphQL API using the GraphQL playground. Navigate to the API Information page using the sidebar, to access the GraphQL playground for your project.

Using GraphQL editor, you can inspect the entire GraphQL API structure using the schema introspection feature from the Docs.

We can also create and test GraphQL queries and mutations on our content models using the GraphQL Playground before using them from a client-side application.

Within the image above, we used the getContentModel query from our generated GraphQL API to query our Webiny database for the last content model we created. To get this exact model, we had to pass the modelId of the new model as an argument to the getContentModel query.
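Since the playground screenshot is not reproduced here, the query took roughly this shape; the modelId value is illustrative, and the selected fields mirror the ones we query elsewhere in this article:

query getModel {
    getContentModel(modelId: "blogPost") {
        data {
            name
            description
            createdOn
            modelId
        }
    }
}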

At this point, we have set up our Webiny project and modeled our GraphQL API using the generated Webiny Admin application. The steps below describe how to consume the GraphQL API within a Gatsby application.

Generate An API Access Key

All requests made to your Webiny GraphQL API endpoint must contain a valid token within its request headers for authentication.
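For illustration, a raw HTTP request to the endpoint would carry the token (generated below) in an Authorization header. A minimal sketch with placeholder values, assuming Node 18+ where fetch is built in:

// A sketch of calling the Webiny GraphQL API directly; the endpoint URL
// and token are placeholders for the values from your own project.
(async () => {
    const response = await fetch('<YOUR-GRAPHQL-API-ENDPOINT>', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            // The token goes in the Authorization header, as we will also
            // do in gatsby-config.js further below.
            Authorization: '<YOUR-API-TOKEN>',
        },
        body: JSON.stringify({
            query: '{ listContentModels { data { name modelId } } }',
        }),
    });
    console.log(await response.json());
})();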

From the side menu, click the API Keys item within the Security dropdown to navigate to the API Keys page, where you create and manage the API keys for your GraphQL API.

Using the form placed on the right, we give the new key a name and a description, then we select the All locales radio button option within the Content dropdown. Lastly, within the Headless CMS dropdown, we select the Full Access option from the Access Level dropdown to give this key full access to data within the Headless CMS app of our Admin project.

Note: When granting app access permissions to your API keys, Webiny provides a Custom Access option within the Access Level dropdown to streamline what the API key can be used for within the selected application.

After saving the new API key, a token will be generated for use when accessing the API. In the image below, you can see an example of a token generated for my application within the highlighted box.

Take note of this token, as we will use it next from our Gatsby web application.

Setting Up A Gatsby Single Page Application

Execute the command below to start the installer for creating a new Gatsby project on your local machine using npm, and select your project preferences from the installation prompts.

npm init gatsby

Next, run this command to install the needed dependencies into your Gatsby project:

yarn add gatsby-source-graphql styled-components react-icons moment

To use GraphQL within our Gatsby project, open gatsby-config.js and modify it to match the code block below:

// gatsby-config.js

module.exports = {
    siteMetadata: {
        title: "My Blog Powered by Webiny CMS",
    },
    plugins: [
        "gatsby-plugin-styled-components",
        "gatsby-plugin-react-helmet",
        {
            resolve: "gatsby-source-filesystem",
            options: {
                name: "images",
                path: `${__dirname}/src/images`,
            },
        },
        {
            resolve: "gatsby-source-graphql",
            options: {
                // Arbitrary name for the remote schema Query type
                typeName: "blogs",
                // Field for the remote schema. You'll use this in your Gatsby queries
                fieldName: "posts",
                url: process.env.GATSBY_APP_WEBINY_GRAPHQL_ENDPOINT,
                headers: {
                    Authorization: process.env.GATSBY_APP_WEBINY_GRAPHQL_TOKEN,
                },
            },
        },
    ],
};

Above, we add an external GraphQL API to Gatsby’s internal GraphQL API using the gatsby-source-graphql plugin. As extra options, we supply the GraphQL endpoint URL and pass the access token in the request headers, both read from our Gatsby environment variables.

Note: Run the yarn webiny info command from a terminal launched within the Webiny project to print out the GraphQL API endpoint used in the url field of the gatsby-config.js file above.
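The article assumes these environment variables already exist. One common approach (an assumption on my part, not something Webiny or Gatsby prescribes) is to keep them in a .env.development file and load it with the dotenv package at the top of gatsby-config.js; the values below are placeholders:

# .env.development (placeholder values)
GATSBY_APP_WEBINY_GRAPHQL_ENDPOINT=<YOUR-GRAPHQL-API-ENDPOINT>
GATSBY_APP_WEBINY_GRAPHQL_TOKEN=<YOUR-API-TOKEN>

// at the very top of gatsby-config.js; assumes dotenv is installed
require("dotenv").config({
    path: `.env.${process.env.NODE_ENV}`,
});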

The next time we start the Gatsby application, our GraphQL schema and data will be merged into Gatsby’s default generated schema, which we can introspect using Gatsby’s GraphiQL playground at http://localhost:8000/___graphql to see fields similar to those in the image below.

Note: A new test content model was later created to demonstrate multiple content models being returned from the listContentModels query.

To query and display this data within the Gatsby application, create a new file (posts.js) containing the following React component:

import React from "react"
import {FiCalendar} from "react-icons/fi"
import {graphql, useStaticQuery, Link} from "gatsby";
import Moment from "moment"

import {PostsContainer, Post, Title, Text, Button, Hover, HoverIcon} from "../styles"
import Header from "../components/header"
import Footer from "../components/footer"

const Posts = () => {
    const data = useStaticQuery(graphql`
        query fetchAllModels {
            posts {
                listContentModels {
                    data {
                        name
                        description
                        createdOn
                        modelId
                    }
                }
            }
        }
    `)

    return (
        <div>
            <Header title={"Home || Blog"}/>
            <div style={{display: "flex", justifyContent: "center"}}>
                <PostsContainer>
                    <div>
                        <Title align={"center"} bold> A collection of my ideas </Title>
                        <Text align={"center"} color={"grey"}>
                            A small space to document my thoughts in form of blog posts and articles
                        </Text>
                    </div>
                    <br/>
                    {/* modelId doubles as the list key, since no separate id field is queried */}
                    {
                        data.posts.listContentModels.data.map(({name, description, createdOn, modelId}) => (
                            <Post key={modelId}>
                                <div style={{display: "flex"}}>
                                    <HoverIcon>
                                        <FiCalendar/>
                                    </HoverIcon>
                                    <div>
                                        <Text small style={{marginTop: "2px"}}>
                                            {Moment(createdOn).format("dddd, MMMM Do, YYYY")}
                                        </Text>
                                    </div>
                                </div>
                                <br/>
                                <Title bold align={"center"}> {name} </Title>
                                <br/>
                                <Text align={"center"}> {description} </Text>
                                <br/>
                                <div style={{textAlign: "right"}}>
                                    <Link to={`/${modelId}`} state={{modelId}}>
                                        <Button> Continue Reading </Button>
                                    </Link>
                                </div>
                            </Post>
                        ))
                    }
                    <br/>
                </PostsContainer>
            </div>
            <Footer/>
        </div>
    )
}

export default Posts

In the code block above, we make a query using the useStaticQuery hook from Gatsby and use the returned data to populate the posts within the component, which is styled using styled-components.

Taking a closer look at the Continue Reading button in the code block above, we can see it is wrapped with a Link that points to a page named after the modelId currently being iterated over. This page is created dynamically from a template each time the Gatsby application starts.

To implement this creation of dynamic pages, create a new file (gatsby-node.js) with the following code.

// gatsby-node.js
const path = require("path")

exports.createPages = async ({graphql, actions, reporter}) => {
    const {createPage} = actions

    const result = await graphql(`
        query getContent {
            posts {
                listContentModels {
                    data {
                        description
                        createdOn
                        modelId
                        name
                    }
                }
            }
        }
    `)

    // Template to create dynamic pages from.
    const blogPostTemplate = path.resolve("src/pages/post.js")

    result.data.posts.listContentModels.data.forEach(({description, modelId, createdOn, name}) => {
        createPage({
            path: modelId,
            component: blogPostTemplate,
            // data to pass into the dynamic template
            context: {
                name,
                description,
                modelId,
                createdOn,
            },
        })
    })
}

First, we make a GraphQL query to fetch all models created on Webiny, which returns an array with the requested fields. We then iterate over the result, using the createPage API from Gatsby to create a new page dynamically with the component in ./pages/post.js as a template.

At this point, the template component does not exist yet. Create a new file (post.js) with the code below to create the template.

// ./pages/post.js

import React from "react"
import Moment from "moment"

import Header from "../components/header"
import Footer from "../components/footer"
import {PostContainer, Text, Title} from "../styles";
import Layout from "../components/layout";

const Post = ({ pageContext }) => {
    const { name, description, createdOn } = pageContext

    return (
        <Layout>
            <Header title={name}/>
            <br/>

            <div style={{display: "flex", justifyContent: "center"}}>
                <PostContainer>
                    <Title align={"center"}> {name} </Title>
                    <Text color={"grey"} align={"center"}>
                      Created On {Moment(createdOn).format("dddd, MMMM Do, YYYY")}
                    </Text>
                    <br/>
                    <Text> {description} </Text>
                </PostContainer>
            </div>

            <br/>

            <Footer/>
        </Layout>
    )
}

export default Post

This component receives a pageContext object each time it is used as a template; the fields within the object are de-structured and used to populate the data shown on the page, as in the example shown below.

Conclusion

In this article, we have had a detailed look at what Webiny is, the serverless features it provides, and how its Headless CMS can be used with a static site generator such as Gatsby as a source of data.

As explained earlier, Webiny provides more serverless services apart from the Headless CMS, such as the no-code Form Builder for building interactive forms, the Page Builder, and even a File Manager for use within your applications.

You can join the Webiny community on Slack or contribute to the open-source Webiny project on GitHub.
