Drupal News

Drupal Association blog: DrupalCon Nashville and Tennessee’s Discrimination standing

Main Drupal Feed - Thu, 01/25/2018 - 15:51

As many already know, DrupalCon North America 2018 will be held in Nashville, TN. The Drupal Association puts a lot of time and effort into choosing a site for DrupalCon North America - a two-to-three-year process that involves requests for proposals, several rounds of interviews, site visits, and contract negotiations. We do not take this lightly, and we include both logistically important and socially relevant questions for review.

Unfortunately, despite all of that planning, some things happen outside of our control. In April 2016, after a five-month RFP and interview process, we signed a contract with the City of Nashville to host DrupalCon North America 2018. A few weeks later, the State of Tennessee introduced and passed a new law that the Drupal Association does not support and that, as many community members have pointed out, prevents public employees of the State of California from attending DrupalCon if sponsored by their employer, because California subsequently banned state-sponsored travel to Tennessee.

For those who have asked, the timeline of events transpired as follows:

  • April 2016: Drupal Association contracted with Nashville, TN to host DrupalCon North America 2018
  • Early May 2016: Tennessee enacted the Amendment Senate Bill No. 1556 House Bill No. 1840
  • January 2017: California enacted restrictions banning state sponsored travel to TN in response to SB1556/HB1840.

Specifically, on May 2, 2016, SB1556/HB1840 was enacted. The law declares that no person providing counseling or therapy services will be required to counsel or serve a client as to goals, outcomes, or behaviors that conflict with the sincerely held principles of the counselor or therapist; requires such a counselor or therapist to refer the client to another counselor or therapist; creates immunity for such action; and maintains liability for counselors who will not counsel a client based on the counselor's religious beliefs when the individual seeking or undergoing the counseling is in imminent danger of harming themselves or others.

It is unfortunate that this bill became law. The Nashville Convention & Visitors Corporation, who we worked with to contract DrupalCon Nashville, and the greater Nashville business community including the Nashville Mayor’s office believe discrimination has no place in their home state.

In response to this bill and in anticipation of other potential discrimination bills in the future, Nashville Convention & Visitors Corporation became a founding and leading member of Tennessee Thrives, a business coalition of now more than 400 companies across Tennessee who believe that in order for Tennessee businesses and communities to thrive they must be diverse and welcoming for all people, regardless of race, sex, national origin, ethnicity, religion, age, disability, sexual orientation or gender identity. You can read more here about Tennessee Thrives and the Nashville Metro area’s history of social advancements, as well as a statement from the Nashville Convention and Visitors Corporation.

Here is the Tennessee Thrives pledge:

We believe that equal treatment of all Tennesseans and visitors is essential to maintaining Tennessee’s strong brand as a growing and exciting home for business innovation, economic development, a best-in-class workforce, and dynamic entertainment, travel and tourism industries.

In order for Tennessee businesses to compete for top talent, we believe our workplaces and communities must be diverse and welcoming for all people, regardless of race, sex, national origin, ethnicity, religion, age, disability, sexual orientation or gender identity.

As signers of the Tennessee Thrives pledge, we are committed to promoting an attractive, prosperous, and economically vibrant Tennessee. A united Tennessee is a thriving Tennessee.

Tennessee Thrives identified 12 discriminatory bills that were filed in the General Assembly in 2017, and thanks in part to their efforts, only two were approved.

As a further measure of welcome for our Drupal community, the Mayor of Nashville has extended a Statement of Welcome to the DrupalCon community. The city is very excited that DrupalCon has chosen Nashville as its 2018 North American location, and hopes we can see past the politics of the larger state to the welcoming intent of the City of Nashville.

In response to the Drupal community concerns with Nashville as a DrupalCon city, the Nashville Convention & Visitors Corporation offered this statement:

Nashville is an open, welcoming city that respects and embraces the differences among us. We believe that our differences make our community stronger. A sampling of Nashville’s social advancements in contradiction to the actions of TN legislature include:

  • In 2016, the Metro Nashville Council unanimously voted to approve a resolution asking the state legislature to oppose bills opposing the U.S. Supreme Court’s decision on marriage equality. The resolution’s lead co-sponsor was Councilwoman Nancy Van Reese, who is openly gay.
  • On March 21, 2016, Mayor Megan Barry issued an executive order requiring training of all employees of the Metropolitan Government in diversity issues and sexual harassment awareness and prevention.
  • In May, 2016, Nashville hosted the International Gay Rugby Bingham Cup. Mayor Megan Barry served on the Host Committee to bring the Bingham Cup to Nashville.
  • While a mayoral candidate, Mayor Megan Barry officiated the first same-sex marriage in Nashville just hours after the Supreme Court ruled that same-sex marriage is allowed in all 50 states. (During her inauguration in September, 2015, Mayor Barry invited Nashville in Harmony to perform. The group is Tennessee’s first and only musical arts organization specifically created for gay, lesbian, bisexual, and transgender people – and their straight allies. The group performed at events hosted by the previous Nashville Mayor, as well.)
  • While a mayoral candidate, Mayor Megan Barry received the Ally Award from the Nashville LGBT Chamber of Commerce in 2015.
  • In 2011, Nashville extended nondiscrimination protections to employees of the city and contractors.  (Unfortunately, state government nullified the local decision.)
  • In 2009, the Metro Nashville Council passed an ordinance that protects Metro employees from discrimination based on their sexual orientation or gender identity. (Sponsored by then Council Member-At-Large Megan Barry, who now serves as Mayor of Nashville)
  • In 2008, the Metro Nashville School Board approved sexual orientation and gender identity protections for students and staff.

For those concerned about a Tennessee bathroom bill, please know that Tennessee has never passed one; the bill has been killed in process every time it has come up for a vote, including this past March. There is no “Bathroom Bill” in the state of Tennessee. All-gender restrooms will also be available at the Nashville Music City Center during DrupalCon. We understand people's concern with a state that submits this kind of law for consideration. Many of us can relate to the idea that the actions of lawmakers are not always representative of the greater population, particularly the population of a metro area, and Nashville shares this concern.

At our core, the Drupal Association believes in community, collaboration, and openness. We work hard throughout DrupalCon planning to ensure not only that the complicated logistics are addressed, but also that the space is accessible and that everyone in our community feels safe, welcome, and comfortable.

In addition to our core DrupalCon programming, we also include the following services at DrupalCon for those who need them:

  • Our Code of Conduct
  • Registration grants and scholarships
  • Interpreters (for the hard of hearing)
  • Special meals: Kosher, Halal, vegan, vegetarian, gluten-free, etc
  • New mother’s room
  • Quiet room and prayer space
  • Venue accessibility and mobility assistance
  • Local AA Meeting information
  • Speaker inclusion fund
  • No-photograph lanyards and communication preference stickers
  • All-gender restrooms
  • Women in Drupal events
  • Inclusion BOFs
  • On-site contacts for incident reporting

You can learn more about all of these services on our DrupalCon Nashville website under On-site Resources.

We believe, despite the current legislative challenges that the City of Nashville is working to overcome at a state level, that we will have a safe, diverse, celebratory space for our community in Nashville this spring. We’re excited to bring DrupalCon to the city of Nashville, and we’re confident it will be an amazing event.

We want to hear about your experiences at DrupalCon and in the cities we visit. Please participate in our post-Con surveys so that we can follow up with both our internal teams and host cities if there are areas where the events can be improved for attendees.

aleksip.net: Data inheritance in Pattern Lab

Main Drupal Feed - Thu, 01/25/2018 - 12:26
When Pattern Lab renders a pattern, it does not by default include the data for any included patterns. There are plugins that can be used to include this data, but the many different ways to include one pattern within another and to implement data inheritance can cause confusion.

erdfisch: Drupalcon mentored core sprint - part 3 - what happens next?

Main Drupal Feed - Thu, 01/25/2018 - 11:36
By Michael Lenahan, 25.01.2018

Hi there! This is the third and final part of a series of blog posts about the Drupal Mentored Core Sprint, which traditionally takes place every Friday at Drupalcon.

If you want to read what came before, here you go:
Part one is here
Part two is here

In this blog post, I would like to show you a little of what happens behind the scenes at the Drupalcon Friday contribution sprint.

The live core commit

The day is completed by the core live commit. This is where one issue that was worked on during the day is committed to Drupal's git repository.

In Vienna, the issue that got committed was https://www.drupal.org/node/2912636; the contributors on Friday were gido and wengerk. They were mentored by the wonderful valthebald, whom we met in part two.

This is the moment when lauriii committed the code to the 8.5.x branch of Drupal, ably assisted by webchick:

Here's the thing about the live commit: anybody in the room could have been up there on stage. Behind the scenes, the mentoring team has been working hard with the core committers to ensure that a commit can be safely made. This is a difficult task: Drupal is a complicated system, and it's interesting to see just how much thought needs to go into a seemingly simple commit.

Below is a list of some other issues that were worked on during the Friday sprint in Vienna. Some have since been committed; others are still being worked on, even now. The point here is that progress was made on these issues and new contributors helped to move them forward (take a look at what happened in these issues on 29 September, 2017):

Coding Standards
DbLog erroring
SettingsTray disappearing
Add @internal to Form classes
Table drag
Batch missing title on screen
Url alias for private file uploads
Remove #size
Views DISTINCT multilingual
Toolbar uncacheable page
spelling"therefor"

The live commit is a chance for us to celebrate the success of one team, but really all those who worked on the issues above deserve to be celebrated. Our measure for how successful the day has been is whether or not the participants return to the issues after the day is over, and keep using their contribution skills.

Sign up to be a mentor

Are you coming to Nashville? Are you thinking, "maybe I have the skills to be a mentor"? That's great!

Sign up to be a mentor here.

After that, you will get regular emails with instructions on how to prepare for the Mentored Core Sprint.

Don't feel that you need to know the answers to everything in order to be a mentor. You will always have other mentors around you, people you can ask for help when you get stuck.

In the Mentored Core Sprint, we are using a really well-tested process, which we have refined and improved over many years.

The key thing to remember is this: you don't need to fix the issue for the participants. Your job is to teach them how the issue queue works.

Understanding the value of finding the solution is far more important than finding the solution itself.

What to do at Drupalcon

In the exhibition hall, there is a Mentors' Table. Go and say hello, it's a good place to hang out. We have stickers for you, and mentoring cards explaining all the different tasks on offer ...

Keep an eye out on the BoFs board during the week. There are special meetings to prepare first-time mentors, plus a meeting to do issue triage to determine good Novice issues.

Here's a clue: Novice does NOT mean trivial or easy. It means that the steps on the issue are well-defined, and actionable.

Then, show up bright and early on sprint day and have a great time!

You'll be wearing the best t-shirt in town.

Here is Rachel, briefing the team before the day starts.

Every year, after it's all over, we meet at a nice restaurant for the mentors' dinner. Thank you to the wonderful companies in the Drupal community who sponsored us last September in Vienna.

So, that's a wrap!

There's a lot more to be said on this topic, but I'll leave it there. I hope I've been able to persuade you to give the Friday core sprint a try, as a participant or as a mentor. It's worth it.

If you're going to Nashville (lucky you), then make sure you stay for the Friday as well.

We're currently planning Drupal Europe. We will most definitely include a Mentored Contribution Day! See you there!

Credit to Amazee Labs and Roy Segall for use of photos from the Drupalcon Vienna flickr stream, made available under the CC BY-NC-SA 2.0 licence.

Tags: planet, drupal-planet, drupalcon, mentoring, code sprint

Lullabot: Local Drupal Development Roundup

Main Drupal Feed - Wed, 01/24/2018 - 20:35

If you’d asked me a decade ago what local setup for web development would look like, I would have guessed “simpler, easier, and turn-key”. After all, WAMP was getting to be rather usable and stable on Windows, Linux was beginning to be preinstalled on laptops, and Mac OS X was in its heyday of being the primary focus for Apple.

Today, I see every new web developer struggle with just keeping their locals running. Instead of consolidation, we’ve seen a multitude of good options become available, with no clear “best” choice. Many of these options require a strong, almost expert-level of understanding of *nix systems administration and management. Yet, most junior web developers have little command line experience or have only been exposed to Windows environments in their post-secondary training.

What’s a developer lead to do? Let's review the options available for 2018!

1. The stack as an app: *AMP and friends

In this model, a native application is downloaded and run locally. For example, MAMP contains an isolated stack with Apache, PHP, and MySQL compiled for Windows or macOS. This is by far the simplest way to get a local environment up and running for Mac or Windows users. It’s also the easiest to recover from when things go wrong. Simply uninstall and reinstall the app, and you’ll have a clean slate.

However, there are some significant limitations. If your PHP app requires a PHP extension that’s not included, adding it in by hand can be difficult. Sometimes, the configuration they ship with can deviate from your actual server environments, leading to the “it works on my local but nowhere else” problem. Finally, the skills you learn won’t apply directly to production environments, or if you change operating systems locally.

2. Native on the workstation

This style of setup involves using the command line to install the appropriate software locally. For example, Mac users would use Homebrew and Homebrew-PHP to install Apache, PHP, and MySQL. Linux users would use apt or yum - which would be similar to setting up on a remote server. Windows users have the option of the Linux subsystem now available in Windows 10.

This is slightly more complicated than an AMP application as it requires the command line instead of using a GUI dashboard. Instead of one bundle with “everything”, you have to know what you need to install. For example, simply running apt install php won’t give you common extensions like gd for image processing. However, once you’ve set up a local this way, you will have immediately transferable skills to production environments. And, if you need to install something like the PHP mongodb or redis extensions, it’s straightforward either through the package manager or through pecl.
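For example, on a Debian or Ubuntu system, a minimal sketch of a Drupal-ready stack might look like the following; package names vary by distribution and PHP version, so treat this as illustrative:

```bash
# Base stack plus the PHP extensions Drupal commonly needs.
sudo apt install apache2 mysql-server php php-gd php-xml php-mbstring php-mysql

# PECL extensions such as redis or mongodb need the PEAR/dev packages first.
sudo apt install php-pear php-dev
sudo pecl install redis
```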

Linux on the Laptop

Running a Linux distribution as your primary operating system is a great way to do local development. Everything you do is transferable to production environments, and there are incredible resources online for learning how to set everything up. However, the usual caveats around battery life and laptop hardware availability for Linux support remain.

3. Virtual Machines

Virtual machines are actually really old technology—older than Unix itself. As hardware extensions for virtualization support and 4GB+ of RAM became standard in workstations, running a full virtual machine for development work (and not just on servers) became reasonable. With 8 or 16GB of memory, it’s entirely reasonable to run multiple virtual machines at once without a noticeable slowdown.

VirtualBox is a broadly used, free virtual machine application that runs on macOS, Linux, and Windows. Using virtual machines can significantly simplify local development when working on significantly different sites. Perhaps one site is using PHP 5.6 with MySQL, and another is using PHP 7.1 with MariaDB. Or, another is running something entirely different, like Ruby, Python, or even Windows and .Net. Having virtual machines lets you keep the environment separate and isolated.

However, maintaining each environment can take time. You have to manually copy code into the virtual machine, or install a full environment for editing code. Resetting to a pristine state takes time.

Vagrant

Clearly, there were advantages in using virtual machines—if only they were easier to maintain! This is where Vagrant comes in. For example, instead of spending time adding a virtual machine with a wizard, and manually running an OS installer, Vagrant makes initial setup as easy as vagrant up.

Vagrant really shines in my work as an architect, where I’m often auditing a few different sites at the same time. I may not have access to anything beyond a git repository and a database dump, so having a generic, repeatable, and isolated PHP environment is a huge time saver.

Syncing code into a VM is something Vagrant handles out of the box, with support for NFS on Linux and macOS hosts, SMB on Windows hosts, and rsync for anywhere. This saves from having to maintain multiple IDE and editor installations, letting those all live on your primary OS.

Of course, someone has to create the initial virtual machine and configure it into something called a “base box”. Conceptually, a base box is what each Vagrant project forks off of, such as ubuntu/zesty. Some developers prefer to start with an OS-only box, and then use a provisioning tool like Ansible or Puppet to add packages and configure them. I’ve found that’s too complicated for many developers, who just want a straightforward VM they can boot and edit. Luckily, Vagrant also supports custom base boxes with whatever software you want baked in.

For Drupal development, there’s DrupalVM or my own provisionless trusty-lamp base box. You can find more base boxes on the Vagrant website.
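As an illustration of how little glue a project needs once a base box exists, here is a minimal Vagrantfile sketch; the box name, IP address, and share type are illustrative:

```ruby
# Minimal sketch: boot a base box, give it a private IP, and share the
# project directory into the guest over NFS (macOS/Linux hosts).
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.network "private_network", ip: "192.168.33.10"
  config.vm.synced_folder ".", "/var/www", type: "nfs"
end
```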

4. Docker

In many circles, Docker is the “one true answer” for local development. While Docker has a lot of promise, in my experience it’s also the most complicated option available. Docker uses APIs that are part of the Linux kernel to run containers, which means that Docker containers can’t run straight under macOS or Windows. In those cases, a lightweight virtual machine is run, and Docker containers are run inside of that. If you’re already using Docker in production (which is its own can of worms), then running Docker for locals can be a huge win.

Like a virtual machine, somehow your in-development code has to be pushed inside of the container. This has been a historical pain point for Docker, and can only be avoided by running Linux as your primary OS. docker-sync is probably the best solution today until the osxfs driver gets closer to native performance. Using Linux as your primary operating system will give you the best Docker experience, as it can use bind mounts which have no performance impact.

I’ve heard good things about Kalabox, but haven’t used it myself. Kalabox works fine today but is not being actively developed, in favor of Lando, a CLI tool. Pantheon supports taking an existing site and making it work locally through a Kalabox plugin. If your hosting provider offers tooling like that, it’s worth investigating before diving too deeply into other options.

I did some investigation recently into docker4drupal. It worked pretty well in my basic setup, but I haven’t used it on a real client project for day-to-day work. It includes many optional services that are disabled out of the box but may be a little overwhelming to read through. A good strategy to learn how Docker works is to build a basic local environment by hand, and then switch over to docker4drupal to save having to maintain something custom over the long run.
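If you do take that build-it-by-hand route first, a hand-rolled Compose file can be surprisingly small. This is a sketch, not the docker4drupal configuration; the image tags and credentials are illustrative:

```yaml
# A two-container LAMP-ish sketch: PHP with Apache, plus MariaDB.
version: '2'
services:
  php:
    image: php:7.1-apache
    ports:
      - '8080:80'
    volumes:
      # Bind-mount the codebase; fast on Linux, slower on macOS (see above).
      - ./:/var/www/html
  db:
    image: mariadb:10.1
    environment:
      MYSQL_ROOT_PASSWORD: drupal
      MYSQL_DATABASE: drupal
```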

ddev is another “tool on top of docker” made by a team with ties to the Drupal community. It was easy to get going for a basic Drupal 8 site. One interesting design decision is to store site files and database files outside of Docker, and to require a special flag to remove them. While this limits some Docker functionality (like snapshotting a database container for update hook testing), I’ve seen many developers lose an hour after accidentally deleting a container. If they keep focusing on these common pain points, this could eventually be one of the most friendly Docker tools to use.

One of the biggest issues with Docker on macOS is that by default, it stores all containers in a single disk image limited to 64GB of space. I’ve seen developers fill this up and completely trash all of their local Docker instances. Deleting containers often won’t recover much space from this file, so if your Mac is running out of disk space you may have to reset Docker entirely to recover the disk space.

When things go wrong, debugging your local environment with Docker requires a solid understanding of an entire stack of software: Shells in both your host and your containers, Linux package managers, init systems, networking, docker-compose, and Docker itself.

I have worked with a few clients who were using Docker for both production and local development. In one case, a small team with only two developers ended up going back to MAMP for locals due to the complexity of Docker relative to their needs. In the other case, I found it was faster to pull the site into a Vagrant VM than to get their Docker containers up and running smoothly. What’s important is to remember that Docker doesn’t solve the scripting and container setup for you—so if you decide to use Docker, be prepared to maintain that tooling and infrastructure. The only thing worse than no local environment automation is automation that’s broken.

At Lullabot, we use Docker to run Tugboat, and for local development of lullabot.com itself. It took some valiant efforts by Sally Young, but it’s been fairly smooth since we transitioned to using docker-sync.

What should your team use?

Paraphrasing what I wrote over in the README for the trusty-lamp basebox:

Deciding what local development environment to choose for you and your team can be tricky. Here are three options, ordered in terms of complexity:

  1. Is your team entirely new to PHP and web development in general? Consider using something like MAMP instead of Vagrant or Docker.
  2. Does your team have a good handle on web development, but is running into the limitations of running the site on macOS or Windows? Does your team have mixed operating systems, including Windows and Linux? Consider using Vagrant to solve all of these pain points.
  3. Is your team using Docker in production, or already maintaining Dockerfiles? If so, consider using docker4drupal or your production Docker containers locally.

Where do you see local development going in 2018? If you had time to completely reset from scratch, what tooling would you use? Let us know in the comments below.

myDropWizard.com: Use the Backup and Migrate module in Drupal 6? Audit your permissions!

Main Drupal Feed - Wed, 01/24/2018 - 19:20

As you may know, Drupal 6 has reached End-of-Life (EOL) which means the Drupal Security Team is no longer doing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are and we're one of them!

Today, a security update for the Backup and Migrate module for Drupal 7 was released for a Critical issue that could allow arbitrary PHP execution - see the security advisory.

While arbitrary PHP execution is scary, this issue is actually about the permissions provided by the Backup and Migrate module not being marked as potentially dangerous. The new release simply marks those permissions appropriately.

There won't be a security release for this issue for Drupal 6!

This is because Drupal 6 doesn't provide a way to mark permissions as dangerous. It doesn't even allow a separate description for the permissions, which we could use to call out the danger (the machine name used in code is the same as the name shown to users - this is no longer the case in Drupal 7 and newer).
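To illustrate the difference, here is a minimal sketch using the standard permission hooks (the permission name is hypothetical): Drupal 6's hook_perm() returns bare strings, while Drupal 7's hook_permission() can flag a permission with 'restrict access', which is what produces the warning in the permissions UI:

```php
<?php

// Drupal 6: permissions are bare strings. There is nowhere to attach a
// description or a "dangerous" flag.
function mymodule_perm() {
  return array('perform backups');
}

// Drupal 7: permissions are structured arrays, and 'restrict access' => TRUE
// marks the permission as one to grant only to trusted roles.
function mymodule_permission() {
  return array(
    'perform backups' => array(
      'title' => t('Perform backups'),
      'restrict access' => TRUE,
    ),
  );
}
```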

However, marking the permissions as dangerous isn't the real fix! The real fix is auditing your permissions to "verify only trusted users are granted permissions defined by the module."

This is something you can do with Drupal 6, even without a new release. :-)

So, in summary: no security release for Drupal 6 - go audit your permissions.

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on Drupal.org).

BYU AMP Theme

Drupal Themes - Wed, 01/24/2018 - 19:20

Subtheme of [AMP Theme](https://drupal.org/project/amptheme) for Drupal 8.

This is an AMP version of the BYU theme. Due to the nature of AMP, it is nowhere near as flexible as the normal BYU Drupal 8 theme. **You should not and cannot use this as a replacement for that theme.**

This theme requires much of the editing to be done in the code, not in theme settings. It does not use web components (AMP does not allow custom components).

Daniel Pocock: apt-get install more contributors

Main Drupal Feed - Wed, 01/24/2018 - 11:21

Every year I participate in a number of initiatives introducing people to free software and helping them make a first contribution. After all, making the first contribution to free software is a very significant milestone on the way to becoming a leader in the world of software engineering. Anything we can do to improve this experience and make it accessible to more people would appear to be vital to the continuation of our communities and the solutions we produce.

During the time I've been involved in mentoring, I've observed that there are many technical steps in helping people make their first contribution that could be automated. While it may seem like creating SSH and PGP keys is not that hard to explain, wouldn't it be nice if we could whisk new contributors through this process in much the same way that we help people become users with the Debian Installer?

Paving the path to a first contribution

Imagine the following series of steps:

  1. Install Debian
  2. apt install new-contributor-wizard
  3. Run the new-contributor-wizard (sets up domain name, SSH, PGP, calls apt to install necessary tools, procmail or similar filters, join IRC channels, creates static blog with Jekyll, ...)
  4. write a patch, git push
  5. write a blog about the patch, git push

Steps 2 and 3 can eliminate a lot of "where do I start?" head-scratching for new contributors and it can eliminate a lot of repetitive communication for mentors. In programs like GSoC and Outreachy, where there is a huge burst of enthusiasm during the application process (February/March), will a tool like this help a higher percentage of the applicants make a first contribution to free software? For example, if 50% of applicants made a contribution last March, could this tool raise that to 70% in March 2019? Is it likely more will become repeat contributors if their first contribution is achieved more quickly after using a tool like this? Is this an important pattern for the success of our communities? Could this also be a useful stepping stone in the progression from being a user to making a first upload to mentors.debian.net?

Could this wizard be generic enough to help multiple communities, helping people share a plugin for Mozilla, contribute their first theme for Drupal or a package for Fedora?

Not just for developers

Notice I've deliberately used the word contributor and not developer. It takes many different people with different skills to build a successful community and this wizard will also be useful for people who are not writing code.

What would you include in this wizard?

Please feel free to add ideas to the wiki page.

All projects really need a couple of mentors to support them through the summer, so if you are able to be a co-mentor for this or any of the other projects (or even to propose your own topic), now is a great time to join the debian-outreach list and contact us. You don't need to be a Debian Developer either, and several of these projects are widely useful outside Debian.

miggle: learning Drupal in a week - my first job experience

Main Drupal Feed - Wed, 01/24/2018 - 09:38
Upon arriving I was welcomed to the office and settled in at a desk. Initially, I was tasked with exploring Drupal and what it could do. Acquia Dev Desktop was the first application I opened, and after experimenting with some of the prebuilt sites I began to gather an understanding of Drupal and why it is used.

INsReady: Single Sign-on using OAuth2 and JWT for Distributed Architecture

Main Drupal Feed - Wed, 01/24/2018 - 05:35

Single sign-on (SSO) is a property, where a user logs in with a single ID and password to gain access to a connected system or systems without using different usernames or passwords, or in some configurations seamlessly sign on at each system. A simple version of single sign-on can be achieved over IP networks using cookies but only if the sites share a common DNS parent domain. ---- https://en.wikipedia.org/wiki/Single_sign-on

As the definition suggests, SSO is a critical part of the system design and user experience design of a complex, distributed system, or of a new application that integrates with an existing connected system. With SSO enabled, a system owner can manage access control in one centralized place, so granting users permissions across multiple subsystems stays organized. On the other hand, an end user only needs to secure one set of credentials to access multiple resources, or to access functionality whose distributed architecture is hidden from the user.

As we enter 2018, our software becomes more complex and its services become more ubiquitous. Let's use Google's SSO as an example to illustrate the demand for a modern SSO:

  • A user can sign in with password once for both Gmail.com and YouTube.com
  • A user can go to Feedly.com or New York Times and use the "Sign-in with Google" to authorize third parties to access the user's data
  • A user can sign in with password on a mobile device to sync all photos or contacts from Google
  • A Google Home device can connect to multiple people's Google accounts, and read out their calendar events when needed
  • YouTube.com developers can use Polymer as frontend technology, and authenticate with YouTube.com backend to load the content via web services API

You might not realize the complexity of a system that supports the modern use cases above until your system needs one and you have to develop the support. Let's translate the above use cases into SSO technical requirements:

  • Support SSO across multiple domains
  • Support Password Grant (sign-in directly on the web), Authorization Code Grant (user authorizes a third party), Client Credentials Grant (machine sign-in), and Implicit Grant (third-party web app sign-in)
  • Support distributed architecture, where your authentication server is not necessarily on the same domain or the same server as your resource servers
  • Web services APIs on resource servers can effectively authenticate requests
  • No technology lock-in for the authentication server, resource servers, or client-side apps
  • Support a seamless user authorization experience across different client-side technologies (Web, Mobile, or IoT), and across different first-party and third-party applications

Fortunately, we can leverage existing open standards and open source software to implement SSO for a distributed system. First, we will rely on the OAuth 2.0 Authorization Framework and JSON Web Token (JWT) open protocols. OAuth 2.0 is used to support common authentication workflows; in fact, the four types of grants in the requirements above are terminologies borrowed from the OAuth 2.0 protocol. The JWT protocol is used to standardize the sharing of a successful authentication result across client apps and resource servers. The protocol allows a resource server to trust a client request without double-checking with the authentication server, which lowers the amount of communication within a distributed system and therefore increases the performance of overall authentication and identification. For more technical details on how to use OAuth 2.0 and JWT for authentication, please see Stateless authentication with OAuth 2 and JWT - JavaZone 2015.
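As a rough sketch of why this lowers communication: a PHP resource server can verify an RS256-signed JWT locally with the authentication server's public key, with no round trip per request. This example assumes the firebase/php-jwt library; the key path and claim names are illustrative:

```php
<?php

require 'vendor/autoload.php';

use Firebase\JWT\JWT;

// Public key published by the authentication server (path is illustrative).
$publicKey = file_get_contents('/etc/keys/auth-server-public.pem');

// Bearer token taken from the Authorization header.
$jwt = str_replace('Bearer ', '', $_SERVER['HTTP_AUTHORIZATION']);

try {
  // Signature and expiry are checked locally against the public key.
  $claims = JWT::decode($jwt, $publicKey, ['RS256']);
  // The claims identify the user; 'sub' is a common subject claim.
  $userId = $claims->sub;
}
catch (\Exception $e) {
  http_response_code(401);
  exit;
}
```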

As for building the authentication server, where all users and machines will sign in, authenticate, authorize, or identify themselves, the critical requirement is that this server implements the OAuth 2.0 protocol and uses JWT as the bearer token. As long as the authentication server implements the protocols, the rest of the facilitating features can be built on any technology. I like to use the simple_oauth module with Drupal 8, because out of the box this solution provides the whole application, including users, consumers, and token management. In particular, I have been helping to optimize the user experience of the user authorization process for different use cases. If you are not familiar with Drupal, the Contenta CMS distribution has pre-packaged simple_oauth and its dependencies for you.

Once the authentication server is in place, we implement the protocol and workflows on the resource server and client-side apps. This part largely depends on the resource server and client-side technologies you picked. We are building this part of the integration with Node.js, Laravel, Drupal 7, and Drupal 8 applications. At the time of writing, we have published the module oauth2_jwt_sso on Drupal 8.

I leave the extensibility, limitations, and more technical details of this SSO solution for the upcoming DrupalCon Nashville session. I will include the session video here in late April 2018.

Tags: SSO, OAuth2, JWT, Decoupled, Distributed Architecture, Security, Drupal Planet

PreviousNext: Better image optimisation in Drupal

Main Drupal Feed - Wed, 01/24/2018 - 03:08

When optimising a site for performance, one of the options with the best effort-to-reward ratio is image optimisation. Crunching those images in your Front End workflow is easy, but how about author-uploaded images through the CMS?

by Tony Comben / 24 January 2018

Recently, a client of ours was looking for ways to reduce the size of uploaded images on their site without burdening the authors. To solve this, we used the module Image Optimize which allows you to use a number of compression tools, both local and 3rd party.

The tools it currently supports include AdvPng, OptiPng, PngCrush, PngOut, PngQuant, JfifRemove, JpegOptim, and JpegTran.

We decided to avoid the use of 3rd party services, as processing the images on our servers could reduce processing time (no waiting for a third party to reply) and ensure reliability.

Picking your server-side compression tool

In order to pick the tools which best served our needs, we picked images that closely represented the type of image the authors often use: a photo featuring a person's face with a complex background, as one PNG and one JPEG, and ran each through the tools with a moderately aggressive compression level.

PNG Results

| Compression Library | Compressed size | Percentage saving |
| --- | --- | --- |
| Original (Drupal 8 default resizing) | 234kb | - |
| AdvPng | 234kb | 0% |
| OptiPng | 200kb | 14.52% |
| PngCrush | 200kb | 14.52% |
| PngOut | 194kb | 17.09% |
| PngQuant | 63kb | 73.07% |

| Compression Library | Compressed size | Percentage saving |
| --- | --- | --- |
| Original | 1403kb | - |
| AdvPng | 1403kb | 0% |
| OptiPng | 1288kb | 8.19% |
| PngCrush | 1288kb | 8.19% |
| PngOut | 1313kb | 6.41% |
| PngQuant | 445kb | 68.28% |

JPEG Results

| Compression Library | Compressed size | Percentage saving |
| --- | --- | --- |
| Original (Drupal 8 default resizing) | 57kb | - |
| JfifRemove | 57kb | 0% |
| JpegOptim | 49kb | 14.03% |
| JpegTran | 57kb | 0% |

| Compression Library | Compressed size | Percentage saving |
| --- | --- | --- |
| Original | 778kb | - |
| JfifRemove | 778kb | 0% |
| JpegOptim | 83kb | 89.33% |
| JpegTran | 715kb | 8.09% |

Using a combination of PngQuant and JpegOptim, we could save anywhere between 14% and 89% in file size, with larger images bringing greater percentage savings.

Setting up automated image compression in Drupal 8

The Image Optimize module allows us to set up optimisation pipelines and attach them to our image styles. This allows us to set both site-wide and per-image style optimisation.

After installing the Image Optimize module, head to the Image Optimize pipelines configuration (Configuration > Media > Image Optimize pipeline) and add a new optimisation pipeline.

Now add the PngQuant and JpegOptim processors. If they have been installed on the server, Image Optimize should pick up their location automatically, or you can manually set the location if using a standalone binary.

JpegOptim has some additional quality settings: I’m setting “Progressive” to always and “Quality” to a sweet spot of 60; 70 could also be used as a more conservative target.

The final pipeline looks like the following:

Back to the Image Optimize pipelines configuration page, we can now set the new pipeline as the sitewide default:

And boom! Automated sitewide image compression!

Overriding image compression for individual image styles

If the default compression pipeline is too aggressive (or conservative) for a particular image style, we can override it in the Image Styles configuration (Configuration > Media > Image styles). Edit the image style you’d like to override, and select your alternative pipeline:

Applying compression to existing images

Flushing the image cache will recreate existing images with compression the next time the image is loaded. This can be done with the drush command 

drush image-flush --all
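If regenerating every style at once is too heavy for a large site, drush image-flush also accepts a single style name (for example, drush image-flush thumbnail), so you can roll the new compression out one image style at a time.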

Conclusion

Setting up automated image optimisation is a relatively simple process, with potentially large impacts on site performance. If you have experience with image optimisation, I would love to hear about it in the comments.

Tagged Image Optimisation

MidCamp - Midwest Drupal Camp: We are pleased to announce Chris Rooney will be our keynote speaker at MidCamp 2018

Main Drupal Feed - Wed, 01/24/2018 - 00:51

We are so excited to have Chris as our keynote speaker this year.  He is the President and Founder of Digital Bridge Solutions, a Drupal and Magento Agency here in Chicago that has been a supporter of MidCamp since its inception. 

His presentation at our 2017 event, Whitewashed - Drupal's Diversity Problem And How To Solve It, was a deep and eye-opening look at diversity in Drupal and the greater tech world, and at how we can go about making it better.

Since then, he has partnered with Palantir.net on an ambitious inclusion initiative working with students to introduce them to Drupal. Last year, they brought a group of students from Baltimore to DrupalCon Baltimore. They have held Drupal training sessions here in Chicago, and are currently working to bring students from Genesys Works and NPower to DrupalCon Nashville.

Chris' presentation will be a collective group journey into sensitive and vulnerable territories, but promises interactivity, a safe space for the exchange of ideas, and perhaps even a little humor.  We hope you join us for it.

Session Submissions close Friday!

MidCamp is looking for folks just like you to speak to our Drupal audience! Experienced speakers are always welcome, but our camp is also a great place to start for first-time speakers.

MidCamp is soliciting sessions geared toward beginner through advanced Drupal users. Know someone who might be a new voice, but has something to say? Please suggest they submit a session.

Find out more at:

Buy a Ticket

Tickets and Individual Sponsorships are available on the site for MidCamp 2018.

Click here to get yours!

Schedule of Events
  • Thursday, March 8th, 2018 - Training and Sprints
  • Friday, March 9th, 2018 - Sessions and Social
  • Saturday, March 10th, 2018 - Sessions and Social
  • Sunday, March 11th, 2018 - Sprints
Sponsor MidCamp 2018!

Are you or your company interested in becoming a sponsor for the 2018 event? Sponsoring MidCamp is a great way to promote your company, organization, or product and to show your support for Drupal and the Midwest Drupal community. It also is a great opportunity to connect with potential customers and recruit talent.

Find out more at:

Volunteer for MidCamp 2018

Want to be part of the MidCamp action? We're always looking for volunteers to help out during the event.  We need registration table help, room monitors, help with setting up the venue, and help clearing out.  Sign up at http://bit.ly/midcamp-volunteer-signup and we'll be in touch shortly!

We hope you'll join us at MidCamp 2018!

Dcycle: Caching a Drupal 8 REST resource

Main Drupal Feed - Wed, 01/24/2018 - 00:00

Here are a few things I learned about caching for REST resources.

There are probably better ways to accomplish this, but here is what works for me.

Let’s say we have a rest resource that looks something like this in my_module/src/Plugin/rest/resource/MyRestResource.php and we have enabled it using the Rest UI module and given anonymous users permission to view it:

<?php

namespace Drupal\my_module\Plugin\rest\resource;

use Drupal\rest\Plugin\ResourceBase;
use Drupal\rest\ResourceResponse;

/**
 * This is just an example.
 *
 * @RestResource(
 *   id = "this_is_just_an_example",
 *   label = @Translation("Display the title of node 1"),
 *   uri_paths = {
 *     "canonical" = "/api/v1/get"
 *   }
 * )
 */
class MyRestResource extends ResourceBase {

  /**
   * {@inheritdoc}
   */
  public function get() {
    $node = node_load(1);
    $response = new ResourceResponse([
      'title' => $node->getTitle(),
      'time' => time(),
    ]);
    return $response;
  }

}

Now, we can visit http://example.localhost/api/v1/get?_format=json and we will see something like:

{"title":"Some Title","time":1516803204}

Reloading the page, ‘time’ stays the same. That means caching is working; we are not re-computing our Json output each time someone requests it.

How to invalidate the cache when the title changes.

If we edit node 1 and change its title to, say, “Another title”, and reload http://example.localhost/api/v1/get?_format=json, we’ll see the old title. To make sure the cache is invalidated when this happens, we need to provide cacheability metadata to our response telling it when it needs to be recomputed.

Our node, when it’s loaded, contains within it all the caching metadata needed to describe when it should be recomputed: when the title changes, when new filters are added to the text format that’s being used, etc. We can add this information to our ResourceResponse like this:

...
$response->addCacheableDependency($node);
return $response;
...

When we clear our cache with drush cr and reload our page, we’ll see something like:

{"title":"Another title","time":1516804411}

We know this is still cached because the time stays the same no matter how often we load the page. Try it, it’s fun!

Even more fun is changing the title of node 1 and reloading our Json page, and seeing the title change without clearing the cache:

{"title":"Yet another title","time":1516804481} How to set custom cache invalidation events

Let’s say you want to trigger a cache rebuild for some reason other than those defined by the node itself (title change, etc.).

A real-world example might be events: an “upcoming events” page should only display events which start later than now. If we invalidate the cache every day, then we’ll never show yesterday’s events in our events feed. Here, we need to add our custom cache invalidation event, in this case “rebuild events feed”.

For the purpose of this demo, we won’t actually build an events feed, but we’ll see how cron might be able to trigger cache invalidation.

Let’s add the following code to our response:

...
use Drupal\Core\Cache\CacheableMetadata;
...
$response->addCacheableDependency($node);
$response->addCacheableDependency(CacheableMetadata::createFromRenderArray([
  '#cache' => [
    'tags' => [
      'rebuild-events-feed',
    ],
  ],
]));
return $response;
...

This uses Drupal’s cache tags concept and tells Drupal that when the cache tag ‘rebuild-events-feed’ is invalidated, all cacheable responses which have that cache tag should be invalidated as well. I prefer this to a ‘max-age’ setting because it allows us more fine-grained control over when to invalidate our caches.

On cron, we could only invalidate ‘rebuild-events-feed’ if events have passed since our last invalidation of that tag, for example.
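As a minimal sketch of that cron idea (the module name and the once-a-day rule are illustrative; a real implementation would check actual event dates):

```php
<?php

/**
 * Implements hook_cron().
 *
 * Invalidate the events feed the first time cron runs each day, so events
 * that ended yesterday drop out of cached responses.
 */
function my_module_cron() {
  $request_time = \Drupal::time()->getRequestTime();
  $last_flush = \Drupal::state()->get('my_module.events_feed_flushed', 0);

  if (date('Y-m-d', $last_flush) !== date('Y-m-d', $request_time)) {
    \Drupal::service('cache_tags.invalidator')
      ->invalidateTags(['rebuild-events-feed']);
    \Drupal::state()->set('my_module.events_feed_flushed', $request_time);
  }
}
```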

For this example, we’ll just invalidate it manually. Clear your cache to begin using the new code (drush cr), then load the page, you will see something like:

{"hello":"Yet another title","time":1516805677}

As always, the time remains the same no matter how many times you reload the page.

Let’s say you are in the midst of a cron run and you have determined that you need to invalidate your cache for responses which have the cache tag ‘rebuild-events-feed’. You can run:

\Drupal::service('cache_tags.invalidator')->invalidateTags(['rebuild-events-feed'])

Let’s do it in Drush to see it in action:

drush ev "\Drupal::service('cache_tags.invalidator')->\ invalidateTags(['rebuild-events-feed'])"

We’ve just invalidated our ‘rebuild-events-feed’ tag and, hence, Responses that use it.

The dreaded “leaked metadata” error

This one is beyond my competence level, but I wanted to mention it anyway.

Let’s say you want to output your node’s URL to Json, you might consider computing it using $node->toUrl()->toString(). This will give us “/node/1”.

Let’s add it to our code:

...
'title' => $node->getTitle(),
'url' => $node->toUrl()->toString(),
'time' => time(),
...

This results in a very ugly error which completely breaks your site (at least at the time of this writing): “The controller result claims to be providing relevant cache metadata, but leaked metadata was detected. Please ensure you are not rendering content too early.”.

The problem, it seems, is that Drupal detects that the URL object, like the node we saw earlier, contains its own internal information which tells it when its cache should be invalidated. Converting it to a string prevents the Response from being informed about that information somehow (again, if someone can explain this better than me, please leave a comment), so an exception is thrown.

The toString() function has an optional parameter, $collect_bubbleable_metadata, which can be used to get not just a string, but also information about when its cache should be invalidated. In Drush, this will look something like:

drush ev 'print_r(node_load(1)->toUrl()->toString(TRUE))'

Drupal\Core\GeneratedUrl Object
(
    [generatedUrl:protected] => /node/1
    [cacheContexts:protected] => Array
        (
        )
    [cacheTags:protected] => Array
        (
        )
    [cacheMaxAge:protected] => -1
    [attachments:protected] => Array
        (
        )
)

This changes the return type of toString(), though: toString() no longer returns a string but a GeneratedUrl, so this won’t work:

...
'title' => $node->getTitle(),
'url' => $node->toUrl()->toString(TRUE),
'time' => time(),
...

It gives us the error “Could not normalize object of type Drupal\Core\GeneratedUrl, no supporting normalizer found”.

ohthehugemanatee commented on Drupal.org on how to fix this. Integrating his suggestion, our code now looks like:

...
$url = $node->toUrl()->toString(TRUE);
$response = new ResourceResponse([
  'title' => $node->getTitle(),
  'url' => $url->getGeneratedUrl(),
  'time' => time(),
]);
$response->addCacheableDependency($node);
$response->addCacheableDependency($url);
...

This will now work as expected.

With all the fun we’re having, though, let’s take this a step further: let’s say we want to export the feed of frontpage items in our Response:

$url = $node->toUrl()->toString(TRUE);
$view = \Drupal\views\Views::getView("frontpage");
$view->setDisplay("feed_1");
$view_render_array = $view->render();
$rendered_view = render($view_render_array);
$response = new ResourceResponse([
  'title' => $node->getTitle(),
  'url' => $url->getGeneratedUrl(),
  'view' => $rendered_view,
  'time' => time(),
]);
$response->addCacheableDependency($node);
$response->addCacheableDependency($url);
$response->addCacheableDependency(CacheableMetadata::createFromRenderArray($view_render_array));

You will not be surprised to see the “leaked metadata was detected” error again… In fact, you have come to love and expect this error at this point.

Here is where I’m completely out of my league; according to Crell, “[i]f you [use render() yourself], you’re wrong and you should fix your code”, but I’m not sure how to get a rendered view without using render() myself… I’ve implemented a variation on a comment on Drupal.org by mikejw suggesting the use of a different render context to prevent Drupal from complaining.

// Requires: use Drupal\Core\Render\RenderContext; at the top of the file.
$view_render_array = NULL;
$rendered_view = NULL;
\Drupal::service('renderer')->executeInRenderContext(new RenderContext(), function () use ($view, &$view_render_array, &$rendered_view) {
  $view_render_array = $view->render();
  $rendered_view = render($view_render_array);
});

If we check to make sure we have this line in our code:

$response->addCacheableDependency(CacheableMetadata::createFromRenderArray($view_render_array));

we’re telling our Response’s cache to invalidate whenever our view’s cache invalidates. So, for example, if we have several nodes promoted to the front page in our view, we can modify any one of them and our entire Response’s cache will be invalidated and rebuilt.

Resources and further reading


Drupal.org Featured Case Studies: Chicago Park District Website

Main Drupal Feed - Tue, 01/23/2018 - 21:39
Completed Drupal site or project URL: https://www.chicagoparkdistrict.com/

The Chicago Park District owns more than 8,800 acres of green space, making it the largest municipal park manager in the nation. The Chicago Park District’s more than 600 parks offer thousands of sports and physical activities as well as cultural and environmental programs for youth, adults, and seniors. The Chicago Park District is also responsible for 28 indoor pools, 50 outdoor pools, and 26 miles of lakefront including 23 swimming beaches plus one inland beach.

Clarity redesigned, built, and hosts the official website for the Chicago Park District (CPD). Clarity designed and developed this user-friendly, mobile-responsive site, with a unified look and feel and a marketing emphasis to promote CPD’s parks, programs, and events. The new website acts as a solution focused on its customers – “front end” visitors of the website and “back end” content administrators – both of whom have a wide scope of needs.

Specifically, the new site provides the following improvements and features:

  • New Content Management System (CMS) Platform
    • Drupal 8, the latest version of the popular open-source framework;
    • Allows CPD to more easily integrate and connect to third-party tools, such as
      • ActiveNet, which provides externally-hosted ecommerce functions;
      • AppliTrack, which provides job postings;
      • Bonfire, which provides procurement and contracting opportunities;
      • MailChimp, which provides newsletter signup capabilities.
  • Updated design based on user focus group reactions to the old site, including
    • A cleaner, refreshed look built for devices of all sizes;
    • Home page updates that allow CPD staff to push more information in a more organized fashion;
    • Larger emphasis on maps (hugely important for such a large metropolitan area);
    • The ability to highlight features and attractions, such as artworks and natural areas, that CPD has to offer both residents and visitors;
    • Overall increased speed and performance.
  • Improved administrative functions that allow for
    • Distributed content responsibilities;
    • Workflow approvals to ensure editorial integrity;
    • More modular administrative tools allowing CPD to highlight location details such as accessibility features

With its new site, Chicago Park District is now poised to better serve the long-term needs of residents and visitors for years to come.

Web Wash: Getting Started with Bootstrap in Drupal 8

Main Drupal Feed - Tue, 01/23/2018 - 16:00
Bootstrap is a front-end framework for building websites. It ships prebuilt CSS and JavaScript components that make building sites fast. It comes with all sorts of common components that every website needs, such as a grid system, buttons, drop-downs, responsive form elements, a carousel (of course) and so much more. As a developer I don't want to spend time styling yet another button. I just want to know which CSS class to add to an <a> tag so it looks like a button, and I'm good to go. One complaint about Bootstrap is you can spot it a mile away, because a lot of developers use the default look-and-feel. When you see the famous Jumbotron you know it's a Bootstrap site. But with a little bit of effort you can make your site look unique.
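Tying back to the button example: with Bootstrap's button classes, a plain link renders as a styled button with no custom CSS at all:

```html
<a class="btn btn-primary" href="/signup" role="button">Sign up</a>
```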

Aten Design Group: Using Address Fields in Configuration Forms

Main Drupal Feed - Tue, 01/23/2018 - 15:40

In Drupal 7, the Address Field module gave developers a way to collect complex address information with relative ease. You could simply add the field to your content type and configure which countries you support, along with what parts of an address are needed. However, this ease was limited to fieldable entities. If you needed to collect address information somewhere that wasn’t a fieldable entity, you had a lot more work in store for you. Chances are good that the end result would be as few text fields as possible, no validation, and support for only a single country. If you were feeling ambitious, maybe you would have provided a select list with the states or provinces populated via a hardcoded array.

During my most recent Drupal 8 project I wanted to collect structured address information outside the context of an entity. Specifically, I wanted to add a section for address and phone number to the Basic Site Settings configuration page. As it turns out, the same functionality you get on entities is now also available to the Form API.

Address Field’s port to Drupal 8 came in the form of a whole new module, the Address module. With it comes a new address form element. Let’s use that to add a “Site Address” field to the Basic Settings. First we’ll implement hook_form_FORM_ID_alter() in a custom module’s .module file:

use Drupal\Core\Form\FormStateInterface;

function MYMODULE_form_system_site_information_settings_alter(&$form, FormStateInterface $form_state) {
  // Overrides go here...
}

Don’t forget to add use Drupal\Core\Form\FormStateInterface; at the top of your file. Next, we’ll add a details group and a fieldset for the address components to go into:

function MYMODULE_form_system_site_information_settings_alter(&$form, FormStateInterface $form_state) {
  // Create our contact information section.
  $form['site_location'] = [
    '#type' => 'details',
    '#title' => t('Site Location'),
    '#open' => TRUE,
  ];

  $form['site_location']['address'] = [
    '#type' => 'fieldset',
    '#title' => t('Address'),
  ];
}

Once the fieldset is in place, we can go ahead and add the address components. To do that you’ll first need to install the Address module and its dependencies. You’ll also need to add use CommerceGuys\Addressing\AddressFormat\AddressField; at the top of the file as we’ll need some of the constants defined there later.

use Drupal\Core\Form\FormStateInterface;
use CommerceGuys\Addressing\AddressFormat\AddressField;

function MYMODULE_form_system_site_information_settings_alter(&$form, FormStateInterface $form_state) {
  // … detail and fieldset code …

  // Create the address field.
  $form['site_location']['address']['site_address'] = [
    '#type' => 'address',
    '#default_value' => ['country_code' => 'US'],
    '#used_fields' => [
      AddressField::ADDRESS_LINE1,
      AddressField::ADDRESS_LINE2,
      AddressField::ADMINISTRATIVE_AREA,
      AddressField::LOCALITY,
      AddressField::POSTAL_CODE,
    ],
    '#available_countries' => ['US'],
  ];
}

There are a few things we’re doing here worth going over. First, we set '#type' => 'address', which the Address module creates for us. Next, we set a #default_value for country_code of US, so that the United States-specific field configuration is displayed when the page loads.

The #used_fields key allows us to configure which address information we want to collect. This is done by passing an array of constants as defined in the AddressField class. The full list of options is:

AddressField::ADMINISTRATIVE_AREA
AddressField::LOCALITY
AddressField::DEPENDENT_LOCALITY
AddressField::POSTAL_CODE
AddressField::SORTING_CODE
AddressField::ADDRESS_LINE1
AddressField::ADDRESS_LINE2
AddressField::ORGANIZATION
AddressField::GIVEN_NAME
AddressField::ADDITIONAL_NAME
AddressField::FAMILY_NAME

Without any configuration, a full address field looks like this when displaying addresses for the United States.

For our example above, we only needed the street address (ADDRESS_LINE1 and ADDRESS_LINE2), city (LOCALITY), state (ADMINISTRATIVE_AREA), and zip code (POSTAL_CODE).

Lastly, we define which countries we will be supporting. This is done by passing an array of country codes into the #available_countries key. For our example we only need addresses from the United States, so that’s the only value we pass in.

The last step in our process is saving the information to the Basic Site Settings config file. First we need to add a new submit handler to the form. At the end of our hook, let’s add this:

function MYMODULE_form_system_site_information_settings_alter(&$form, FormStateInterface $form_state) {
  // … detail and fieldset code …

  // … address field code …

  // Add a custom submit handler for our new values.
  $form['#submit'][] = 'MYMODULE_site_address_submit';
}

Now we’ll create the handler:

/**
 * Custom submit handler for our address settings.
 */
function MYMODULE_site_address_submit($form, FormStateInterface $form_state) {
  \Drupal::configFactory()->getEditable('system.site')
    ->set('address', $form_state->getValue('site_address'))
    ->save();
}

This loads our site_address field from the submitted values in $form_state, and saves it to the system.site config. The exported system.site.yml file should now look something like:

name: 'My Awesome Site'
mail: test@domain.com
slogan: ''
page:
  403: ''
  404: ''
  front: /user/login
admin_compact_mode: false
weight_select_max: 100
langcode: en
default_langcode: en
address:
  country_code: US
  langcode: ''
  address_line1: '123 W Elm St.'
  address_line2: ''
  locality: Denver
  administrative_area: CO
  postal_code: '80266'
  given_name: null
  additional_name: null
  family_name: null
  organization: null
  sorting_code: null
  dependent_locality: null

After that, we need to make sure our field will use the saved address as the #default_value. Back in our hook, let’s update that key with the following:

function MYMODULE_form_system_site_information_settings_alter(&$form, FormStateInterface $form_state) {
  // … detail and fieldset code …

  // Create the address field.
  $form['site_location']['address']['site_address'] = [
    '#type' => 'address',
    '#default_value' => \Drupal::config('system.site')->get('address') ?? [
      'country_code' => 'US',
    ],
    '#used_fields' => [
      AddressField::ADDRESS_LINE1,
      AddressField::ADDRESS_LINE2,
      AddressField::ADMINISTRATIVE_AREA,
      AddressField::LOCALITY,
      AddressField::POSTAL_CODE,
    ],
    '#available_countries' => ['US'],
  ];

  // … custom submit handler ...
}

Using PHP 7’s null coalescing operator, we either set the default to the saved values or to a sensible fallback if nothing has been saved yet. Putting this all together, our module file should now look like this:

<?php

/**
 * @file
 * Main module file.
 */

use Drupal\Core\Form\FormStateInterface;
use CommerceGuys\Addressing\AddressFormat\AddressField;

/**
 * Implements hook_form_FORM_ID_alter().
 */
function MYMODULE_form_system_site_information_settings_alter(&$form, FormStateInterface $form_state) {
  // Create our contact information section.
  $form['site_location'] = [
    '#type' => 'details',
    '#title' => t('Site Location'),
    '#open' => TRUE,
  ];

  $form['site_location']['address'] = [
    '#type' => 'fieldset',
    '#title' => t('Address'),
  ];

  // Create the address field.
  $form['site_location']['address']['site_address'] = [
    '#type' => 'address',
    '#default_value' => \Drupal::config('system.site')->get('address') ?? [
      'country_code' => 'US',
    ],
    '#used_fields' => [
      AddressField::ADDRESS_LINE1,
      AddressField::ADDRESS_LINE2,
      AddressField::ADMINISTRATIVE_AREA,
      AddressField::LOCALITY,
      AddressField::POSTAL_CODE,
    ],
    '#available_countries' => ['US'],
  ];

  // Add a custom submit handler for our new values.
  $form['#submit'][] = 'MYMODULE_site_address_submit';
}

/**
 * Custom submit handler for our address settings.
 */
function MYMODULE_site_address_submit($form, FormStateInterface $form_state) {
  \Drupal::configFactory()->getEditable('system.site')
    ->set('address', $form_state->getValue('site_address'))
    ->save();
}

Lastly we should do some house cleaning in case our module gets uninstalled for any reason. In the same directory as the MYMODULE.module file, let’s add a MYMODULE.install file with the following code:

<?php

/**
 * Implements hook_uninstall().
 */
function MYMODULE_uninstall() {
  // Delete the custom address config values.
  \Drupal::configFactory()->getEditable('system.site')
    ->clear('address')
    ->save();
}

That’s it! Now we have a way to provide location information to the global site configuration. Using that data, I’ll be able to display this information elsewhere as text or as a Google Map. Being able to use the same features that Address field types have, I can leverage other modules that display address information or build my own displays, because I now have reliably structured data to work with.
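As a rough illustration of that last point, reading the saved values back out for display could look something like this sketch (the render array key and the one-line formatting are only illustrative; the Address module also ships its own formatters):

// Read the structured address back out of configuration.
$address = \Drupal::config('system.site')->get('address');

if (!empty($address)) {
  // Naive inline rendering; a real site might use the Address
  // module's formatters or feed the data to a map instead.
  $build['site_address'] = [
    '#markup' => t('@line1, @locality, @area @postal', [
      '@line1' => $address['address_line1'],
      '@locality' => $address['locality'],
      '@area' => $address['administrative_area'],
      '@postal' => $address['postal_code'],
    ]),
  ];
}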

Amazee Labs: Practices - Amazee Agile Agency Survey Results - Part 9

Main Drupal Feed - Tue, 01/23/2018 - 09:10

This is part 9 of our series processing the results of the Amazee Agile Agency Survey. Previously I wrote about client interactions; this time let’s focus on practices. How often do teams deploy code? Are they practising peer reviews, automated testing, pair programming or story mapping?

Josef Dabernig Tue, 01/23/2018 - 10:10

When asked “How often does your team deploy code?”, 53% of the teams answered that they do deployments “Rolling / Whenever necessary”. 13.3% deploy “About once a week”, another 13.3% “About every two weeks”, and 6.7% answered that they deploy “Daily”. The remaining respondents chose freeform answers, such as different frequencies for the dev/stage/live environments, or that it depends on the client.

For us at Amazee, the deployment schedule depends on the needs of the client. Thanks to the automation that our Amazee.io hosting environment provides, any team member can execute a deployment on their own when it makes sense. Some high-availability clients require a fixed deployment schedule, which our team has set up to run every week; outside of that schedule, only critical hotfixes are deployed immediately. Most of our clients allow us to deploy whenever necessary, but if downtime is needed for a more complex deployment, we usually try to schedule it outside of business hours. For global customers whose websites serve users around the world, we try to find the deployment slot that fits best and rely on a proxy server like Varnish to keep serving anonymous users during any deployment downtime.


Our second question was geared towards finding out which agile practices teams use and how important they consider them. Respondents could rate each practice from “Unknown”, “Not needed” and “Tried but failed” through “Somewhat in use” and “Actively in use” up to “Very important”. The practice that was most widely unknown is mob programming. Story mapping is also widely unknown, though a good number of respondents rated it “Somewhat in use”. Pair programming is somewhat in use for many, but also drew a good number of “Unknown” or “Not needed” responses. The practices most often rated “Very important” were peer/code reviews and user testing. Automated testing got a lot of votes for “Somewhat in use”, and a few respondents rated it “Very important”. Per-ticket branch test environments were rated “Somewhat in use” by many as well.

For us at Amazee, we do peer and code reviews for every work increment within our Scrum teams. This ensures code quality, knowledge transfer and feedback between team members. Automated testing happens for mission-critical features; Vasi has an article with good arguments for why you should invest in it. User testing is performed on about a third of our projects. Automated deployments, continuous integration and per-ticket branch test environments are used extensively, thanks again to the Amazee.io hosting environment goodies. Pair programming is quite common for our teams. While we have experimented with mob programming for teaching purposes, our team didn’t entirely pick it up. Finally, story mapping is something we started using recently with good results, but we don’t have much experience with it yet.

Which practices do you use and how often do you do deployments? Please leave us a comment below. If you are interested in Agile Scrum training, don’t hesitate to contact us.

Stay tuned for the last post where we’ll do a round up of the Agile Agency Survey.

Colorfield: Install Solr 7 for Drupal 8 Search API on Ubuntu 16.04

Main Drupal Feed - Tue, 01/23/2018 - 07:51
christophe Tue, 23/01/2018 - 08:51
A brief introduction to Search API Solr, an update on the ecosystem, and how to get Search API 2.x working on a dev environment with multiple collections.

Drupal core announcements: Drupal 8 will require PHP 7.0 or higher starting March 6, 2019 (one year from now)

Main Drupal Feed - Tue, 01/23/2018 - 00:43

Drupal 8 will require PHP 7.0 or higher starting March 6, 2019. Drupal 8 users who are running Drupal 8 on PHP 5.5 or PHP 5.6 should begin planning to upgrade their PHP version to 7.0 or higher. Drupal 8.6 will be the final Drupal 8 version to support PHP 5, and will reach end-of-life on March 6, 2019, when Drupal 8.7.0 is released. (If 8.7.0 is released before March 6, 2019, the release number for the end-of-life will be updated accordingly, but the end-of-life date will remain the same.)

When planning for which PHP version to upgrade to, consider that PHP 7.2 was released on November 30, 2017 and will remain supported longer than older PHP 7 versions.

Why is support being dropped for PHP 5.5 and 5.6?
  • PHP 5.5 reached its official end-of-life in 2016. Since then, a growing number of the PHP libraries used by Drupal 8 have also discontinued support for PHP 5.5.
  • PHP 5.6 stopped receiving active support from PHP maintainers in January 2017. This means that it is no longer receiving bugfixes, even for some very serious bugs that impact Drupal development.
  • PHP 5.6 is the final PHP 5 version, so the PHP maintainers are providing two years of security fixes for PHP 5.6 beyond its active support, through December 2018. This is a few months after Drupal 8.6's scheduled release and well before Drupal 8.7 would be released.
  • Drupal 8's automated tests require the PHPUnit library, which will drop support for PHP 5.6 in February 2018. Several other third-party dependencies are also dropping PHP 5.6 support in their latest versions.
  • To minimize disruption for both Drupal users and Drupal developers, Drupal 8's support of PHP 5.5 and PHP 5.6 will end at the same time.

We understand that upgrading from PHP 5 to PHP 7 may require time to plan and deploy. We suggest upgrading to PHP 7 in 2018 (rather than waiting for Drupal 8.7.0’s release).
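If you are unsure which PHP version an environment actually runs, a quick check from a one-off script is straightforward. A minimal sketch:

<?php

// Warn if the running PHP version predates the upcoming requirement.
// The 7.0.0 threshold matches the March 6, 2019 cutoff described above.
if (version_compare(PHP_VERSION, '7.0.0', '<')) {
  printf("PHP %s will not be supported by Drupal 8 after March 6, 2019.\n", PHP_VERSION);
}
else {
  printf("PHP %s meets the upcoming requirement.\n", PHP_VERSION);
}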

What if I'm using a hosting service that doesn't offer PHP 7?

A majority of PHP hosting providers already offer PHP 7. If you're using one that doesn't, we suggest asking that provider when they will make it available. If it's not until after March 2019, leave a comment on our tracking issue linking to that hosting provider, so that we can better understand the outliers and perhaps offer some help.

What if I'm at an organization that maintains its own hosting, and we're using Ubuntu 14.04, which bundles PHP 5.5?

You have a few options if you are using Ubuntu 14.04:

  1. The preferred option is to plan an upgrade to Ubuntu 18.04 (scheduled for release in April 2018). This version will be the most future-compatible.
  2. Another option is to upgrade to Ubuntu 16.04, which is available now. You may need to upgrade Ubuntu again in a couple of years if you choose 16.04 now.
  3. Finally, you can choose to upgrade to a separate build of PHP. Ondřej Surý provides a widely used PPA for doing this.
When will Drupal 8 drop support for PHP 7.0?

Support for PHP 7.0 will continue until at least March 6, 2019. We do not yet know whether Drupal 8's PHP 7.0 support will continue past that date, but we will post another announcement as soon as the end of PHP 7.0 support has been scheduled. We recommend you update to PHP 7.1 or higher since those versions will be supported longer.

How does this affect Drupal 8 core development?

Backported fixes account for about 80% of all changes and must continue to work on PHP 5.5 and 5.6 throughout Drupal 8.6.x's support cycle. For this reason, no PHP 7-only changes will be made until the 8.8.x branch is opened in early 2019 (or 8.9.x if 8.8.0 is released in 2018). Once 8.8.x is opened, the library dependencies in that branch can be updated to versions that have a PHP 7.0 requirement, and the Drupal code itself in that branch can begin relying on PHP 7 features. (Drupal 8 release cycle information)

The automated test suite already defaults to using PHPUnit 6 on environments that use PHP 7, but falls back to PHPUnit 4 on PHP 5. The fallback will be removed in the 8.8.x branch.

Does this affect Drupal 7?

No. Drupal 7 remains compatible with PHP 5.2.4 and higher. A separate announcement will be issued if and when that changes.

Palantir: What “Content” Means to Different Teams

Main Drupal Feed - Mon, 01/22/2018 - 22:21
Ken Rickard, Jan 23, 2018

The importance of aligning editorial, marketing, design, and development.


As we’ve discussed before, understanding the content on your website is a critical element in the project plan. Today, we’d like to step back a bit and talk about how different teams in an organization might think about content.

First, let’s define our common teams by function:

  • The Editorial team produces and maintains content for the site.
  • The Marketing team sets strategy and metrics around successful audience engagement and interactions.
  • The UX Design team creates the strategy, visual and interactive components that comprise the site’s features.
  • The Development team builds and supports the site so that it fulfills the needs defined by the other three teams.

Note that these teams may all be organized within a single department (commonly marketing) or spread across the organization. Our concern here is not with organizational structure but rather with the perspective and concerns that are inherent in each team.

When teams start work on a new site or a site redesign, the most common mistake is for these four teams to work in silos, as if their individual tasks are unrelated to each other. In this case, a number of issues may arise:

  • A design may include elements that place extra burden on the editorial team.
  • An editorial workflow may require the development of custom code.
  • A marketing plan may ignore the limited editorial and design resources available to achieve its goals.
  • Organizations that have historically relied heavily on non-digital media for marketing and promotions may have to figure out how to incorporate and plan for digital work within their existing workflows.
  • A CMS implementation may not be able to produce certain essential design features, or the budget and timeline may prevent features from being designed a certain way.

Working together, teams can work through these types of issues before they become problems. To do so, it’s vital to get everyone speaking the same language around your content. We like to look at five specific factors when helping teams define their content strategy:

  • Audience defines the users and their needs and answers “who is this for?”
  • Purpose asks the question “what end result are we hoping to achieve?”
  • Workflow deals with the mechanics of content production, approval, publication, and presentation.
  • Transformation explores issues of translation and personalization, so that we define how the content might be modified in distinct contexts.
  • Structure defines the input and storage of the content and how it will be delivered to various publication media. The structure is directly affected by the needs outlined by the three previous items.

Each of these elements has a direct effect on each of our project teams. To understand how, let’s take a look at Dr. Gillinov’s bio page at Cleveland Clinic to see how these questions bring focus to our project goals.

 

There are many elements that make up this comprehensive profile page and they all require each team member mentioned above to consider the following:

  1. Where does the data/content come from?
  2. What pieces of data/content is the editor responsible for?
  3. What does this page look like when all possible content is present vs. for physicians who have very little information?

For the purposes of this discussion, however, let’s focus on the top portion of the page addressing the data/content that makes up Dr. Gillinov’s basic information, as it will help us illustrate our points. The first thing we look for here is the number of elements within the design pattern and how they might be produced. At first count, there are 11:

Let’s see how those elements break down.

  1. Picture – an uploaded image of the person.
  2. Video Link – a link to an external video service.
  3. Rating – 1-5 stars based on patient feedback.
  4. Rating Count – the number of patient ratings.
  5. Comment Count – the number of patient comments.
  6. Name – the name and honorifics for this person.
  7. Department – the assigned internal department.
  8. Primary Location – the main office location for this person.
  9. Type of Doctor – indicates pediatrician, adult physician, or both.
  10. Languages – a list of languages spoken.
  11. Surgeon – indicates that this person is a licensed surgeon.
Audience

There are multiple types of users that would view this page: potential patients, existing patients, families of patients, and medical professionals. Their needs are different based on who they are and where they are in their care journey.

Purpose

The primary purpose of this specific component is to provide basic information to the audience. The information presented helps them understand the services and availability of this doctor. The use of a picture and a video are designed to build trust by establishing a human connection in addition to the facts presented.

The inclusion of patient ratings serves as an impartial arbiter of the quality of services provided, while the department and location information helps people understand where they can go to receive treatment.

Workflow

For this example, the important question is “Which part of this page is editorial and which part is automated?” Here, the ratings pull in from a secondary system, which the editors do not control. The video is merely a link reference, but is editorial data. And while some of the doctor information might be pulled from an external system, here we assume that it can be edited for display on the web.

There is also an unlisted assumption here – call it feature #12 – about whether or not this doctor has active privileges at the hospital. Our editorial workflow needs to account for when an individual physician changes jobs, retires, or moves away.

Transformations

We use the term “transformations” here as a bit of a catch-all to describe how the data might need to change in different contexts. A common context shift is language.

When considering a multilingual website, we need to evaluate each element of the page for the desirability and feasibility of its translation.

Take the Video field for instance: Translating the link text for a video is trivial, but does the video itself need to be recorded in multiple languages (or at least subtitled)? Does it make sense to show a Spanish translation of the video link if the video is only in English?

The other most common transformation is personalization, wherein content elements are transformed based on our understanding of who the reader is and what they care about.

The key factor to consider about personalization is that it can create exponentially more work for the editorial team. Consider that for each element that desires personalization, we must create one new version for each variation. Let’s say that we want to segment our audience experience by three data points:

  • Returning patient (yes / no)
  • Local resident (yes / no)
  • Age cohort (child / adult / senior)

Now our one piece of content needs 2 x 2 x 3 = 12 variants, plus the original. For clarity, here’s how that looks mapped out: 

If we add in cases where one of the answers is not known, then the math becomes 3 x 3 x 4 = 36 plus the original variant.

As you can imagine, keeping track of those options can become a heavy editorial burden quite quickly if we were to personalize multiple elements on a page.
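The arithmetic generalizes: the variant count is the product of each segment's number of options. A quick sketch using the example segments above (the names are illustrative only):

// Each personalization segment multiplies the number of content variants.
$segments = [
  'returning_patient' => 2, // yes / no
  'local_resident'    => 2, // yes / no
  'age_cohort'        => 3, // child / adult / senior
];

// 2 x 2 x 3 = 12 variants, plus the original.
printf("%d variants\n", array_product($segments));

// Allowing an "unknown" answer adds one option per segment: 3 x 3 x 4 = 36.
$with_unknown = array_map(function ($options) {
  return $options + 1;
}, $segments);
printf("%d variants with unknowns\n", array_product($with_unknown));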

Structure

The above questions help inform how this page is structured on the back end. Additionally, we have to consider:

  • What fields do we need to capture and report this data?
  • What format should the data be displayed in?
  • What services (other than the website) might consume this data?
  • In what other contexts might this data be shown?

This last question gives an easy example of the type of decision that your programmers may need to make. To fully understand, let’s look for a minute at the context of a search result.

Here, the results are alphabetized by the physician’s last name. If we were to enter the physician’s name as it appears in English, “A. Mark Gillinov, MD”, a computer cannot natively sort it by last name. We should also consider whether the honorific “MD” should influence the sort order, and whether to sort by first name as well in the case of multiple matches on a common surname.

That generally leads to splitting the sort value out into a 13th field: the sort name. In our example the sort name is likely to be “Gillinov Mark A.” The remaining question is whether editors should provide that detail or whether it should be automatically inferred by a custom element in the CMS.
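If the sort name is inferred automatically, the logic might resemble the following sketch. The helper name and the honorific list are hypothetical, not part of any CMS API, and it naively assumes a single-word surname:

/**
 * Derives a sortable name like "Gillinov A. Mark" from a display name.
 *
 * Hypothetical helper: strips trailing honorifics and moves the final
 * word (assumed to be the surname) to the front.
 */
function mymodule_derive_sort_name($display_name) {
  // Drop trailing honorifics like ", MD" so they don't affect sorting.
  $name = preg_replace('/,\s*(MD|DO|PhD)\.?$/i', '', $display_name);
  $parts = explode(' ', trim($name));
  $family = array_pop($parts);
  return $family . ' ' . implode(' ', $parts);
}

// Yields "Gillinov A. Mark", close to the article's "Gillinov Mark A."
// example; editors could still override the inferred value.
print mymodule_derive_sort_name('A. Mark Gillinov, MD');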

Additionally, look at the elements that contain links:

  • Video
  • Ratings
  • Department
  • Primary Location

The target of these links needs to be captured, and the logic for that link generation accounted for in the CMS architecture. Further, can these elements be automatically derived from existing data (like the doctor’s name) or are they “hidden” metadata points that need to be added?

In most cases, the mapping for these elements is based on metadata:

  • Video – requires a unique URL for a YouTube video.
  • Ratings – requires a physician ID number provided by the ratings service.
  • Department –  selected from a list of Department pages controlled by the CMS.
  • Primary Location – selected from a list of Location pages controlled by the CMS and containing mapping metadata.
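In code, that mapping often reduces to generating link targets from stored metadata. A hypothetical sketch (the IDs and the ratings-service URL pattern are invented for illustration; only the YouTube pattern reflects a real service):

// Hypothetical metadata captured alongside the profile content.
$metadata = [
  'youtube_id'   => 'dQw4w9WgXcQ', // unique video ID (placeholder)
  'physician_id' => '45678',       // ID assigned by the ratings service
];

// Link targets derived from metadata rather than entered by editors.
$links = [
  'video'   => 'https://www.youtube.com/watch?v=' . $metadata['youtube_id'],
  'ratings' => 'https://ratings.example.com/physician/' . $metadata['physician_id'],
];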

And to add one more element to the structure question: Which of these page elements allow for multiple selection? Can a doctor be part of two departments? Have three primary locations?

Making the Complex Simple

These kinds of workflow complexities in your data are absolutely essential to capture as early in the design process as possible. What if we find that “Languages spoken” is very important to patients but not currently available in our information set? That requires additional editorial work – and likely a staff-wide survey – that could take weeks to complete simply due to the coordination involved. The impact on initial design choices is also worth mentioning. For example, do we need to consider fonts that have text alternates for language glyphs? Does the design still hold up (spacing, line length, relationship to imagery, etc.) when there is twice as much French text as English?

Since we’re working directly with Marketing to define our audience and purpose of each page, we should understand how each element of the design improves the overall user experience. That knowledge allows the entire team to make informed decisions about the level of effort to produce and maintain each content element.

All members of the team should have a familiarity and respect for the concerns of other members of the team. When developing and planning content, it is imperative to involve all four teams as early in the process as possible. To bring your content into focus, always ask the following questions about any design or content element shown in a wireframe or mockup:

  • What content or data will be needed to produce this element?
  • Does this content or data already exist in a usable format?
  • What format will this data be entered and stored in?
  • Will this element be editorially curated or automatically produced?
    • If automated, do we have business logic to support that automation?
    • If curated, do we have the staff time to support that creation and maintenance?

Building a robust content model and workflow is a team effort. The functionality of the CMS and the designs it is capable of producing are what bring the Editorial, Marketing, Design, and Development teams together. Giving them visibility into each other's work streams allows them to collaborate. This collaboration also gives the various team members collective ownership over the content experiences within their organizations.

