This year we are introducing a small registration fee to cover the additional costs of a 2-day conference. In return, we'll deliver the best Scalar so far!
In this talk we will compare three techniques for meta-programming in Scala: macros, shapeless, and code generation. Through a sequence of simple examples we will attempt to characterise the relative pros and cons of each technique, where each becomes appropriate, and when it might turn around and bite you.
We can solve many Scala programming problems using simple tools: algebraic data types, higher order functions, and type classes. Sometimes, however, the code becomes verbose or unwieldy, and we search for ways to make our code cleaner and more maintainable. “Meta-programming” is a broad term describing techniques for generating code using code, but the meta-programming techniques listed above could not be more different. Being able to identify the correct technique can save hours of frustration spent going down blind alleys. This is the problem we are trying to solve in this talk.
The talk is aimed at intermediate Scala developers who have a basic awareness of each technique. You don’t need to know shapeless or macro programming to benefit.
Typeclasses are widely used in advanced Scala code, and yet they are not first-class citizens of the language. This talk will introduce the concept of typeclasses from the ground up by looking at languages such as Haskell and Rust.
Typeclass coherence (multiple instances of the same typeclass for the same type), effective namespacing, and implicit resolution (in Scala) are some of the challenges that arise from extensive use of this concept and language feature. We'll explore those issues and how to apply lessons from other languages to improve the readability and usability of typeclasses in Scala by defining conventions (inspired by the Cats library) and relying on tools (e.g. Simulacrum).
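As a minimal, hand-rolled sketch of the pattern (the names `Show` and `ShowOps` are illustrative, not taken from any particular library), a typeclass in Scala is a trait, a set of implicit instances, and some syntax:

```scala
// A minimal hand-rolled typeclass, illustrating the conventions mentioned above.
trait Show[A] {
  def show(a: A): String
}

object Show {
  // "Summoner" method, a convention popularised by Cats: Show[Int].show(1)
  def apply[A](implicit ev: Show[A]): Show[A] = ev

  // Instances live in the companion object so implicit resolution finds them.
  implicit val intShow: Show[Int] = (a: Int) => a.toString
  implicit val stringShow: Show[String] = (a: String) => "\"" + a + "\""
}

// Extension-method syntax, similar in spirit to what Simulacrum generates.
implicit class ShowOps[A](private val a: A) {
  def show(implicit ev: Show[A]): String = ev.show(a)
}

println(42.show)   // 42
println("hi".show) // "hi"
```

The summoner, companion-object instances, and syntax wrapper are exactly the kind of conventions the talk refers to; Simulacrum automates generating the boilerplate parts.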
Let's face it: You may be able to avoid using sbt to build your Scala projects, but you probably don't want to. It's the de-facto standard for Scala projects and is programmed and configured in Scala itself.
We'll develop a simple sbt plugin together and explore some of the concepts of an sbt build's architecture. The goal is to help you better automate your builds in a single environment rather than stringing together scripts and tools.
Quark is a new Scala DSL for data processing and analytics that runs on top of the Quasar Analytics compiler. Quark is adept at processing semi-structured data and compiles query plans to operations that run entirely inside a target data source. In this presentation, John A. De Goes provides an overview of the open source library, showing several use cases in data processing and analytics. John also demonstrates a powerful technique that every developer can use to create their own purely-functional, type-safe DSLs in the Scala programming language.
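One common technique for purely-functional, type-safe DSLs can be sketched in a deliberately tiny form (this is an illustration of the general approach, not Quark's actual API): a "tagless final" algebra with pluggable interpreters.

```scala
// A tiny type-safe DSL: programs are written against an algebra (trait),
// and interpreters give the operations meaning. The QueryAlg name and
// operations are invented for illustration.
trait QueryAlg[Repr] {
  def lit(n: Int): Repr
  def add(x: Repr, y: Repr): Repr
}

// A program is polymorphic in its representation...
def program[Repr](alg: QueryAlg[Repr]): Repr =
  alg.add(alg.lit(1), alg.add(alg.lit(2), alg.lit(3)))

// ...so the same description can be both evaluated and pretty-printed.
val evaluate = new QueryAlg[Int] {
  def lit(n: Int): Int = n
  def add(x: Int, y: Int): Int = x + y
}

val render = new QueryAlg[String] {
  def lit(n: Int): String = n.toString
  def add(x: String, y: String): String = s"($x + $y)"
}

println(program(evaluate)) // 6
println(program(render))   // (1 + (2 + 3))
```

Separating description from interpretation is what lets a DSL like this compile query plans to run inside a target data source rather than executing eagerly.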
Monad Transformers. "WAT?" "Exactly."
In this session we'll see what monad transformers are, where the need for them comes from, and how to use them effectively.
We'll walk through this rather complicated topic guided by real-life examples, with the noble intent of making our code more readable, maintainable and pleasant to work with.
This talk contains slides that some viewers may find disturbing, most of them containing words like "monad" and/or "functor".
Listener discretion advised.
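To make the motivation concrete, here is a hand-rolled, deliberately simplified transformer: an `OptionT` specialised to `Either` (real libraries such as Cats generalise this over any monad). Without it, flatMapping through a stack of `Either[String, Option[A]]` means unwrapping two layers by hand at every step.

```scala
// The nested effect we keep running into: "may fail with an error,
// and may also be absent".
type Result[A] = Either[String, A]

// A simplified monad transformer: wraps Result[Option[A]] and gives it
// map/flatMap that handle both layers at once.
final case class OptionT[A](value: Result[Option[A]]) {
  def map[B](f: A => B): OptionT[B] =
    OptionT(value.map(_.map(f)))

  def flatMap[B](f: A => OptionT[B]): OptionT[B] =
    OptionT(value.flatMap {
      case Some(a) => f(a).value
      case None    => Right(None) // short-circuit on absence
    })
}

// A hypothetical lookup that can fail, succeed with a hit, or succeed with a miss.
def lookupUser(id: Int): Result[Option[String]] =
  if (id == 1) Right(Some("alice")) else Right(None)

// The two effects now compose in a single for-comprehension.
val greeting: OptionT[String] = for {
  name <- OptionT(lookupUser(1))
} yield s"Hello, $name"

println(greeting.value) // Right(Some(Hello, alice))
```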
Did you know that Akka Cluster contains tools for distributing your data while making conscious decisions about the required consistency guarantees? Let's make a deep dive together into the world of Distributed Data with akka-distributed-data and its CRDTs.
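To give a flavour of what a CRDT is, here is a hand-rolled grow-only counter. akka-distributed-data ships a production-grade `GCounter`; this sketch only illustrates the merge idea that makes replicas converge.

```scala
// A grow-only counter (GCounter), the "hello world" of CRDTs. Each node
// increments its own slot; merge takes the per-node maximum, so replicas
// converge regardless of the order in which updates arrive.
final case class GCounter(counts: Map[String, Long]) {
  def increment(node: String, by: Long = 1L): GCounter =
    GCounter(counts.updated(node, counts.getOrElse(node, 0L) + by))

  def value: Long = counts.values.sum

  def merge(that: GCounter): GCounter =
    GCounter((counts.keySet ++ that.counts.keySet).map { k =>
      k -> math.max(counts.getOrElse(k, 0L), that.counts.getOrElse(k, 0L))
    }.toMap)
}

val a = GCounter(Map.empty).increment("nodeA").increment("nodeA")
val b = GCounter(Map.empty).increment("nodeB")

// Merging in either order yields the same state: commutativity is what
// lets the replicas skip coordination entirely.
println(a.merge(b).value) // 3
println(b.merge(a).value) // 3
```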
The aim of this workshop is to get familiar with optics in Scala. It will cover basic functional optics using the Monocle library.
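As a taste of what the workshop covers, here is a hand-rolled lens, the simplest functional optic. Monocle's actual encoding is richer, but the core idea — a composable getter/setter pair for immutable data — is the same.

```scala
// A minimal lens: a first-class getter/setter pair for one field of an
// immutable structure, composable to reach into nested data.
final case class Lens[S, A](get: S => A, set: A => S => S) {
  def modify(f: A => A): S => S = s => set(f(get(s)))(s)

  def composeLens[B](that: Lens[A, B]): Lens[S, B] =
    Lens(
      s => that.get(get(s)),
      b => s => set(that.set(b)(get(s)))(s)
    )
}

final case class Address(city: String)
final case class Person(name: String, address: Address)

val addressL: Lens[Person, Address] = Lens(_.address, a => p => p.copy(address = a))
val cityL: Lens[Address, String]    = Lens(_.city, c => a => a.copy(city = c))

// Composition is where optics pay off: no nested copy(copy(...)) noise.
val personCityL: Lens[Person, String] = addressL.composeLens(cityL)

val p = Person("Ada", Address("London"))
println(personCityL.get(p))           // London
println(personCityL.set("Warsaw")(p)) // Person(Ada,Address(Warsaw))
```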
Stream processing has become ubiquitous and Akka Streams, an implementation of the Reactive Streams specification, is one of the hottest kids on the block. In this talk we cover the essentials you have to know to get started. As we strongly believe in writing code as the best way to learn coding, we show zero slides, but a lot of live code and live demos instead.
Akka is a toolkit that brings the actor model to the JVM and helps developers build scalable, resilient and responsive applications. With location transparency and asynchronous message passing, it is designed from the ground up to work in a distributed setting.
While distributed systems help solve some problems, like availability, a new set of problems arises. For example, how do we scale the cluster up or down? What happens if the network is partially unavailable? Akka provides a comprehensive set of cluster features that enable developers to monitor and manage the cluster manually or, in most cases, even automatically.
In this talk I will introduce some of these features and explain what you need to be aware of. You will learn how to start a cluster correctly, and how to add nodes to and (gracefully) remove nodes from a running cluster. Additionally I will show how to handle failure scenarios like network partitions by using an existing split brain resolver or implementing a custom one.
The Google Dapper paper laid the groundwork for distributed tracing systems. I will show how I implemented these ideas using HTrace and Zipkin, and give a live demo of the library I wrote for easy integration into your Akka Streams & HTTP applications.
Detecting anomalies is a hot topic right now. There are many algorithms for doing so, yet they are difficult to reason about and the maths behind them is complex. I will present simple statistical solutions that you can understand without a PhD in maths. I will show you how we did anomaly detection @Allegro at scale. I will talk about how to measure anomalies and how to apply proper machine learning in a streaming fashion. Last but not least, I will present a sample demo as code.
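As an example of the kind of simple statistical solution meant here (a three-sigma rule; the exact methods used at Allegro may well differ), flagging points that sit far from the mean takes only a few lines:

```scala
// Flag values further than `threshold` standard deviations from the mean
// (the classic z-score / three-sigma rule).
def anomalies(xs: Seq[Double], threshold: Double = 3.0): Seq[Double] = {
  val mean   = xs.sum / xs.size
  val stddev = math.sqrt(xs.map(x => math.pow(x - mean, 2)).sum / xs.size)
  xs.filter(x => math.abs(x - mean) > threshold * stddev)
}

// 99 normal readings and one spike: only the spike is reported.
val traffic = Seq.fill(99)(100.0) :+ 1000.0
println(anomalies(traffic)) // List(1000.0)
```

In a streaming setting the mean and standard deviation would be maintained incrementally (e.g. over a sliding window) rather than recomputed over the whole sequence.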
A presentation planned as food for thought: how we can use Akka for something totally different, achieve practical results, explore interesting features (such as using the time gaps between signals as a source of information), and still not work our socks off.
In 30 minutes I would like to show:
1. Why is it worth spending some time learning Gatling, a tool for integration/performance testing of your web application?
2. Under what circumstances is it necessary to have Gatling in your toolbox?
3. What are Gatling's cons, and what kinds of problems can you expect?
For sure there is no silver bullet among testing tools, but you will definitely love the Gatling DSL.
Scala might be a gateway drug to Haskell, and a sizeable part of the community is trying to adopt and improve upon existing Haskell practices in Scala code. But Scala isn't Haskell and was never meant to be; instead, it's more of an improvement over the ML family of languages than anything else.
One salient feature of languages in the ML family is the module system — a tool for specifying and grouping collections of types and functions — which helps with programming in the large. Scala inherits this feature too, just under a different name: objects.
During this talk we'll explore the ML module system — its features and limitations — and Scala's take on it.
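A small sketch of the correspondence (the `QueueSig` name and its operations are invented for illustration): an ML signature becomes a trait with an abstract type member, and an ML structure becomes an object implementing it.

```scala
// The "signature": an interface that hides the representation of Queue
// behind an abstract type member, just like an ML module signature.
trait QueueSig {
  type Queue[A] // representation hidden from clients
  def empty[A]: Queue[A]
  def push[A](q: Queue[A], a: A): Queue[A]
  def pop[A](q: Queue[A]): Option[(A, Queue[A])]
}

// A "structure": one concrete implementation of the signature.
object ListQueue extends QueueSig {
  type Queue[A] = List[A]
  def empty[A]: Queue[A] = Nil
  def push[A](q: Queue[A], a: A): Queue[A] = q :+ a
  def pop[A](q: Queue[A]): Option[(A, Queue[A])] = q match {
    case h :: t => Some((h, t))
    case Nil    => None
  }
}

// Client code is written against the signature, so it works with any
// structure — programming in the large, ML-style.
def sumAll(m: QueueSig)(xs: List[Int]): Int = {
  def go(q: m.Queue[Int], acc: Int): Int = m.pop(q) match {
    case Some((h, t)) => go(t, acc + h)
    case None         => acc
  }
  go(xs.foldLeft(m.empty[Int])((q, a) => m.push(q, a)), 0)
}

println(sumAll(ListQueue)(List(1, 2, 3))) // 6
```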
ScalaCheck is a well-known library for property-based testing. However, property-based testing is not always possible when side effects are involved, for example when writing an integration test that involves data being stored in a database. When writing non-property-based tests, we often need to initialise some data and then verify some assertions on it. However, manual data generation can make our data biased, preventing us from spotting bugs in our code. Having our data generated randomly not only makes our tests less biased, but also makes them a lot more readable by highlighting which parts of our data are actually relevant in each test.
In this talk we will discuss how to reuse some of the existing ScalaCheck code to generate random instances of given types and how these can be combined to generate random case classes. We will analyse the properties of a ScalaCheck generator and provide examples of how we can manipulate existing generators to meet our needs.
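To show the idea in miniature, here is a hand-rolled `Gen` (ScalaCheck's real `Gen` is far richer, but composes in the same way): a generator is just a function from a random source to a value, and `map`/`flatMap` let us combine generators into random case-class instances.

```scala
import scala.util.Random

// A miniature ScalaCheck-style generator: a function from a random
// source to a value, with map/flatMap for composition.
final case class Gen[A](run: Random => A) {
  def map[B](f: A => B): Gen[B] = Gen(r => f(run(r)))
  def flatMap[B](f: A => Gen[B]): Gen[B] = Gen(r => f(run(r)).run(r))
  def sample(seed: Long = System.nanoTime): A = run(new Random(seed))
}

val genAge: Gen[Int] = Gen(r => r.nextInt(100))
val genName: Gen[String] = Gen(r => Seq("Ann", "Bob", "Eve")(r.nextInt(3)))

final case class User(name: String, age: Int)

// Combining generators into a random case class via a for-comprehension.
val genUser: Gen[User] = for {
  name <- genName
  age  <- genAge
} yield User(name, age)

// Deterministic for a fixed seed, which keeps failing tests reproducible.
val u = genUser.sample(seed = 42L)
println(u)
```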
For a lot of people, type-level programming is a fascinating topic. It makes your brain work harder and opens new, interesting possibilities where you did not expect them. One question that comes up more and more frequently, though, is: "so where and how exactly do I use this in my day job as a Scala programmer?". In this talk, we will explore one such use case: generating JSON serializers/deserializers at compile time, with the help of the shapeless library.
Are the machines learning on their own? Wait, is Skynet already here? During this session we will tackle an easy Machine Learning problem and show how it can be processed on Spark, including data cleaning, normalization and the learning process. A workshop-coding session, but only if the machines don’t rise against us.
The radixtree library provides a generic data structure for the typelevel ecosystem. Using this library as an example, I am going to show how to write data structure libraries that work well with typelevel typeclasses such as the ones from the cats library.
Scala, as a hybrid language, does not impose purity on us, and the responsibility for keeping rigor lies with the programmer. In this talk, we’re going to explore how to build a special type that allows controlling when side effects happen. We'll start with a simple naive implementation, make it stack-safe, and on top add support for asynchronous processing. We'll also compare it with what open source has to offer in that regard and explain why some of the implementations just don't cut it. The listener only needs to understand Scala syntax to make the most of the presentation.
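A naive first cut of such a type might look like this (not stack-safe and not asynchronous — those are exactly the refinements the talk goes on to add): the effect is suspended in a thunk and nothing runs until we explicitly say so.

```scala
// A naive IO: suspends a computation until unsafeRun() is called.
// Composing with map/flatMap builds a *description* of a program;
// no side effects happen during construction.
final class IO[A](val unsafeRun: () => A) {
  def map[B](f: A => B): IO[B] =
    new IO(() => f(unsafeRun()))
  def flatMap[B](f: A => IO[B]): IO[B] =
    new IO(() => f(unsafeRun()).unsafeRun()) // not stack-safe!
}

object IO {
  def delay[A](a: => A): IO[A] = new IO(() => a)
}

// Nothing happens while we describe the program...
val program: IO[Int] = for {
  _ <- IO.delay(println("side effect happens only on run"))
  n <- IO.delay(40)
} yield n + 2

// ...effects run only here, at "the end of the world".
println(program.unsafeRun()) // 42
```

The nested calls inside `flatMap` are the reason this version blows the stack on long chains, which is what a trampolined implementation fixes.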
MTL stands for Monad Transformer Library, but it has very little in common with monad transformers. Well, that's confusing, right? To be honest, in ancient times, in some long-forgotten, prehistoric version of Haskell, MTL was a library holding monad transformers - but those times are long forgotten.
In a nutshell, MTL provides a set of abstractions (defined as typeclasses) over many useful patterns that can be applied to types. This allows us to write much more generic & maintainable code, with noticeably less boilerplate. MTL brings a breath of fresh air to pure FP projects that rely heavily on effectful computations closed over monads.
Intrigued? Confused? Or maybe both? You should see my talk, it will be fun & instructive.
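A tiny sketch of the MTL style (the `MonadState` trait below is hand-rolled for illustration; Haskell's mtl and Scala libraries in this space define richer versions): abstract a capability behind a typeclass, write the program against the typeclass, and pick a concrete monad only at the edge.

```scala
// The capability "has mutable-looking state of type S", as a typeclass.
trait MonadState[F[_], S] {
  def get: F[S]
  def set(s: S): F[Unit]
  def pure[A](a: A): F[A]
  def flatMap[A, B](fa: F[A])(f: A => F[B]): F[B]
}

// A program that asks only for that capability, nothing more.
def increment[F[_]](implicit F: MonadState[F, Int]): F[Int] =
  F.flatMap(F.get)(n => F.flatMap(F.set(n + 1))(_ => F.pure(n + 1)))

// One concrete interpretation: the classic state-passing function.
type State[A] = Int => (Int, A)

implicit val stateInstance: MonadState[State, Int] = new MonadState[State, Int] {
  def get: State[Int] = s => (s, s)
  def set(s2: Int): State[Unit] = _ => (s2, ())
  def pure[A](a: A): State[A] = s => (s, a)
  def flatMap[A, B](fa: State[A])(f: A => State[B]): State[B] =
    s => { val (s2, a) = fa(s); f(a)(s2) }
}

println(increment[State].apply(41)) // (42,42)
```

The payoff is that `increment` never mentions `State`; in a real codebase the same program could run against a transformer stack, an IO-based Ref, or a test interpreter.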
This workshop will introduce shapeless, a library for generic programming in Scala. We will discover how to use shapeless to automatically create instances of type classes ("JSON encoder", "Equals", "Show", and so on) for any algebraic data type (case class or sealed trait) using a very small kernel of library code.
The workshop is aimed at established Scala developers who haven't yet got to grips with shapeless. It will use material from Dave's open source ebook "The Type Astronaut's Guide to Shapeless", which you can download for free from the Underscore web site or from GitHub. The book is not required reading for the workshop, but it is a useful resource if you want to get into this fascinating and powerful style of Scala programming.
You will need: a laptop, a copy of Scala, and a clone of the exercises repo. Setup instructions are in the README on GitHub. Please grab these before the workshop if you can, to avoid unexpected last-minute technical problems.
Spark 2.0 comes with a new, powerful feature - Continuous Applications. Unifying the broad choice of inputs and data virtualization with streaming, it enables streaming structured and semi-structured data as DataFrames or Datasets. This, along with the recent developments in Spark ML and the introduction of GraphFrames, gives us new, exciting possibilities like building on-line and time-windowed machine learning pipelines and running analysis on graphs updated in real time. During the presentation, I will demonstrate the broad possibilities of Continuous Applications with an end-to-end example, from obtaining data to running it through a machine learning pipeline.
Apache Spark has become the de-facto standard for writing big data processing pipelines. While the business logic of Spark applications is often at least as complex as what we have been dealing with in a pre-big data world, enabling developers to write comprehensive, fast unit test suites has not been a priority in the design of Spark. The main problem is that you cannot test your code without at least running a local SparkContext. These tests are not really unit tests, and they are too slow for pursuing a test-driven development approach.
In this talk, I will introduce the kontextfrei library, which aims to liberate you from the chains of the SparkContext. I will show how it helps restore the fast feedback loop we take for granted. In addition, I will explain how kontextfrei is implemented, discuss some of the design decisions made, and look at alternative approaches and current limitations.
The talk will be based on a puzzle that should be simple even for beginners: extend Map[(Int, Int), T] with a method row(r: Int): Iterable[T].
It turns out it's not. In Scala it's tricky in at least three places: understanding MapLike, GenLike, CanBuildFrom, and type bounds.
At the end I'll introduce some concepts of the new collections planned for Scala 2.13.
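For contrast, the underlying operation is a one-liner when written as a plain function; the talk's difficulty lies entirely in integrating it into the collections hierarchy rather than in the logic itself.

```scala
// The operation from the puzzle, as a standalone function: collect the
// values of all keys whose first coordinate matches the requested row.
def row[T](m: Map[(Int, Int), T], r: Int): Iterable[T] =
  m.collect { case ((i, _), v) if i == r => v }

val grid = Map((0, 0) -> "a", (0, 1) -> "b", (1, 0) -> "c")
println(row(grid, 0).toList.sorted) // List(a, b)
```

Making `row` a proper method on a Map subclass, so that transformations still return the right type, is where MapLike and CanBuildFrom enter the picture.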
In barely half a year, my team of four launched a product that lets Artsy users bid in real time in global sales hosted by our auction house partners. One of the core pieces of this system is a bidding engine, written in Scala with Akka. We created a small open source library called Atomic Store, which builds upon Akka Persistence, to process bids in the event-sourcing paradigm. This talk discusses how all the pieces fit together, in a use case in which consistency trumps availability.
It's often said in the CQRS community that frameworks are not needed because the basic operations are quite trivial. This is true, as we are going to see during the talk. However, it's not that trivial to deal with failure, asynchronicity, concurrency, IO, etc.
It turns out that functional programming offers many constructs that can help us to deal with all those aspects while staying pure and principled.
We are convinced that a solid functional foundation for CQRS/ES can lay the path for better abstractions and more expressive modelling.
To help themselves deal with complexity, programmers have a long tradition of using metaphors that map the abstract concepts they wield onto concrete objects and activities of everyday life.
Some of these metaphors are so powerful that they gave birth to entire programming cultures — like Object Oriented Programming — or programming techniques — like the Actor Model.
However, an overused metaphor can end up hiding the reality it was supposed to help grasp in the first place; it becomes an illusion.
In this talk, I try to describe how the construction-related metaphors — the “carpenter’s mindset” — can mislead us into believing that programming is a creative activity, and why this can lead to unwanted effects. In contrast, I’ll introduce the “cartographer's mindset”, the idea that programming is first and foremost a matter of discovering abstractions, and show how we can make the carpenter and the cartographer work together to produce better software.
You all use lambda expressions. But what does it mean... lambda? The talk will tell the story behind this term, from a slightly more scientific point of view: Church's lambda calculus, the Entscheidungsproblem, and the incompleteness theorem. Almost all of it will be presented in Scala. And of course you can later impress your friends with some impressive maths tricks. I will show some very crazy pieces of code, such as a perfectly unusable implementation of booleans (based on lambda expressions). Come and see what purely functional really means.
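A preview of those "perfectly unusable" booleans — Church booleans, here monomorphised to `String` for simplicity: a boolean is nothing but the choice it makes between two alternatives.

```scala
// Church booleans: "true" returns its first argument, "false" its second.
// Perfectly unusable, exactly as advertised — but complete: not and and
// fall out as plain lambda manipulations.
type ChurchBool = (String, String) => String // monomorphic for simplicity

val tru: ChurchBool = (t, _) => t
val fls: ChurchBool = (_, f) => f

// Negation just swaps the alternatives.
def not(b: ChurchBool): ChurchBool = (t, f) => b(f, t)

// Conjunction: if a is true, the answer is whatever b says; else false.
def and(a: ChurchBool, b: ChurchBool): ChurchBool = (t, f) => a(b(t, f), f)

println(tru("yes", "no"))           // yes
println(not(tru)("yes", "no"))      // no
println(and(tru, fls)("yes", "no")) // no
```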
Dave is a developer, trainer, and partner at Underscore. He has spent over a decade creating software using functional programming, and has authored and co-authored several books about Scala. His current projects include taking short trips in his space ship and creating synthesizers using cats.
George is an engineer at SoundCloud, where he takes care of high-load systems that power user feeds, notifications, and the social interaction graph. In his spare time he likes exploring the world of type-level programming and does distance running.
Piotr Guzik - Big Data Engineer @Allegro and Consultant @GetInData. I use Spark Streaming, Kafka and Scala in my daily work. From time to time I speak at conferences and workshops. In my free time I enjoy travelling around the world with my fiancée and camera.
Technical Architect at GFT Poland and Lecturer at the University of Łódź by day, sporadic Akka project contributor by night. Interested in how software works at a low level, he does not find big frameworks appealing. This is the main reason why he loves Scala, a very expressive language that allows one to build the right abstractions quickly without losing control over details. Jan is an active member of JUG Łódź and an occasional conference speaker. Currently he is busy with a Big Data project for one of the major investment banks. In his spare time he loves to dust off some old issue of a computer science journal, only to find out that everything had already been invented before he was even born.
John is an author, speaker, entrepreneur, and long-time software architect and engineer. He loves startups, technology, science, software engineering, fitness, and his family. In addition to his consulting business, he's currently CTO at SlamData, a company building cool open source software for NoSQL analytics.
Renato Cavalcanti is an independent Scala developer based in Belgium. Scala aficionado since 2009, he has been hacking in Scala and related technologies for fun and profit. He's the founder of BeScala, co-founder of Strong[Typed] and author of Fun.CQRS.
Maciek is a Scala and Android developer at Wire in Berlin. AI enthusiast, interested in practical applications of artificial neural networks, game theory, Scrum, riding bicycles around the world, and how to stay sane while pursuing a bunch of other interests. He advocates healthy minimalism and keeping one's code clean and simple.
After many years using Java (and loving it), I discovered Scala at DevoxxFR in 2012. That was a blast. Since then, I have become rather obsessed with the language and have sought every opportunity to learn more about it, and about FP in general. I've learned what a monad is, and then I learned not to worry about what a monad is. I've learned about higher-kinded types and all sorts of bizarre things that only people who knew SI-2712 by its number usually care about. Now, I strive to share my enthusiasm about Scala with more and more people, and to make more friends by not talking too much about monads. Also, I wear suits.
Daniel Westheide is a senior consultant at innoQ Deutschland GmbH. He is particularly interested in functional programming and distributed systems and published the blog series and e-book “The Neophyte’s Guide to Scala”.
Andrea is a system researcher and eclectic software engineer with a passion for distributed systems, programming language design, and well crafted software; in his consulting work he leveraged Scala to develop stable platforms that also enable rapid iteration. He’s currently a Research Assistant and PhD Candidate at the Systems Group, Department of Computer Science, ETH Zürich, Switzerland.
Niko Will is a consultant at innoQ and develops software for the JVM. He focuses on designing and implementing highly scalable, distributed software systems. Recently he has dedicated himself to functional and reactive technologies like Scala, Play and Akka, as well as event-driven architectures.
Ionuț is a software developer at Eloquentix, where he works on backend services using Scala. His current interests revolve around functional programming techniques, programming languages and compilers.
I've done a lot of different programming, from low-level kernel modules written in C/C++ through telecommunication systems written in Java EE. I have now settled on Scala, which has been my language of choice since 2011. I'm fascinated by the new ways of delivering software brought by the functional paradigm. At my day job I work in the research department of Adform as a Software/Data Engineer. In my team we’re trying to build the best forecasting service on the market for publishers.
Engineering Lead for Auctions at Artsy. We build systems for helping people buy and sell art at the auction houses we partner with. I’ve been developing software for 20 years, ever since learning Z80 assembly to program my TI-83 at age 12. I specialize in product engineering and systems architecture. I love working in Scala because I think it’s on the cutting edge of letting people write code that is performant, safe, and beautiful, all at once.
Miller at SoftwareMill. A devotee of DDD, Event Sourcing and Polyglot Persistence. Continuously chasing the dream of a perfect software architecture, fulfilling all of the requirements and trends, even the strangest ones.
Scala / Big Data Developer, currently @ IMS Health. Specializes in Big Data projects involving Data Warehousing & Machine Learning. Active member of JUG Łódź. Professional motto: "Data is always more valuable than you recognize it to be".
Java developer since 1999. I have loved programming since my first line of code on the C64 in BASIC. I have 15 years of experience developing JEE software for various companies and projects. Currently I am working for CSS Versicherung in Luzern. I am a Java developer during the day and a Scala/Scala.js developer at night. I like to share my experience in public - so far I have spoken at conferences such as Devoxx, Voxxed Days, JUGs, Geecon, JDay Lviv etc. I like to do shows - presenting things just as they work - with live coding, demos and hacking.
Data Scientist @ Growbots, OS contributor, polyglot programmer biased towards Scala. Fan of distributed systems and graphs. Working on everything from Spark ETLs to ML models to data visualization.
Through his career Michał has worked with C, Java, the forgotten lands of Java EE, Spring, Scala and Big Data. He committed the crime of writing a Java EE book, which may haunt him for the rest of his life. He is an open source contributor and a winner of the JBoss Community Recognition Award in 2013 for his contributions to ShrinkWrap. He is currently one of the 40 CEOs at SoftwareMill, a fully distributed company with no main office and a completely flat organization structure. He has presented at GeeCON Kraków & Prague, Devoxx Poland, Confitura and other events.
Justin always seems to end up tweaking the tooling rather than doing his actual job. Unsurprisingly, he ended up working at JetBrains, improving the sbt support for the IntelliJ Scala plugin.