00:00:22.400
i'm here today to talk to you about actors and threads,
00:00:28.000
and hopefully a better story
00:00:36.800
can't hear me very well
00:00:55.039
so, one: i write a bunch of gems,
00:01:01.680
Dalli, the memcached client, connection_pool, and a few others.
00:01:07.439
i think of myself as a hobbyist researcher, and i
00:01:13.200
tend to focus on scalability and performance.
00:01:18.960
now it's too loud isn't it
00:01:34.720
i did a lot of work with fibers and EventMachine a year or two ago, so you might have heard my name there.
00:01:40.240
day-to-day i work as a technical lead at Carbon Five, which is a web development shop in San
00:01:46.640
Francisco and L.A.
00:01:51.759
so, a quick Ruby history: i got into Ruby for Rails,
00:01:57.200
and i love Ruby because it is a fun language to program in, and i think most of you probably would
00:02:02.880
agree with that statement. i did not get into Ruby because its implementation is awesome:
00:02:10.479
Ruby has some serious performance and scalability issues behind it.
00:02:18.319
what this talk is not: it's not a shootout death match between various
00:02:24.319
concurrency options. these are all tools in your toolkit, and knowing them helps you understand
00:02:30.239
when to use what. threads also have a stigma because they're sort of associated with Java,
00:02:37.120
but using threads doesn't mean that you're going to be a Java programmer.
00:02:42.160
a thread is just a tool to solve a problem
00:02:51.120
two definitions. i want to make sure everyone understands the difference between concurrency and parallelism. concurrency is just being able to do two
00:02:57.760
operations at the same time or in tandem
00:03:02.800
parallelism actually requires two cpu cores, because you're doing two things at exactly the same time.
00:03:09.120
so as you can see there, the blue and the red are two different operations. in both cases they're
00:03:14.720
happening concurrently, but in the bottom case they can actually truly be done in parallel.
00:03:23.280
your operating system gives you only two primitives for concurrency, processes and threads. that's it.
00:03:31.840
everything else that you may have used is just layered on top of those,
00:03:39.920
and by themselves they're very easy to use. it's very easy to spin up ten processes to perform ten different
00:03:46.000
jobs; it's very easy to spin up 10 threads to do 10 different things at the same time.
00:03:52.720
the problem is when you want to communicate, when you want to coordinate between those independent units of
00:03:59.439
execution; we run into problems very, very quickly.
00:04:08.720
there are two ways to communicate between threads and processes, two fundamental mechanisms, and that's
00:04:14.400
sharing data and copying data.
00:04:19.519
and when you break it down it kind of looks like this. this is a very general rule of
00:04:26.000
thumb, but sharing data between threads is going to be more efficient than copying
00:04:31.120
data between processes. so since our topic is scalability and performance,
00:04:38.479
since that's all about efficiency, in general we're shooting for the most efficient approach.
00:04:47.680
your operating system probably follows the POSIX standard,
00:04:53.520
which in 1988 defined various mechanisms for doing IPC, which is inter-process communication.
00:05:01.360
four pretty well-known mechanisms are pipes, sockets, shared memory, and files.
00:05:10.720
and that breaks down into a graph like this: pipes and sockets copy data between processes;
00:05:16.960
shared memory and files share data between processes.
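to make the copy side concrete, here's a tiny sketch (my example, not from the talk) of a pipe copying data between two processes in Ruby:

```ruby
# pipes copy data between processes: the child writes bytes into the pipe,
# the kernel copies them, and the parent reads its own private copy.
# (fork and IO.pipe are available on unix-like systems.)
reader, writer = IO.pipe

pid = fork do
  reader.close
  writer.write("hello from child")  # copied into the kernel's pipe buffer
  writer.close
end

writer.close
message = reader.read               # the parent gets its own copy of the bytes
reader.close
Process.wait(pid)
puts message
```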
00:05:25.680
so threads, i contend, are much more efficient than processes, especially in the Ruby world.
00:05:31.120
but POSIX never defined any mechanism
00:05:39.039
for communicating between threads aside from just variables
00:05:45.840
unfortunately. and this is how it breaks down: obviously we're missing that fourth
00:05:50.880
quadrant. unfortunately, that means we have to use locks,
00:05:56.880
and nobody likes locks. really, i hope nobody here likes locks.
00:06:06.479
so let's go back to what i said about Ruby:
00:06:12.160
runtime efficiency isn't the only goal in the Ruby community, i think.
00:06:17.280
if you wanted runtime efficiency you would have done the Twitter thing and gone to the JVM or C or something like that a long time ago.
00:06:24.560
we're in Ruby because it's a fun language, and we don't want to lose that fun, we don't want to lose that ease of development,
00:06:31.680
and locks make development horrible, not fun.
00:06:38.639
i assume most people here have used locks before, but they're very hard to get right.
00:06:44.080
they're non-deterministic, because you simply don't know when you'll context-switch between
00:06:50.160
processes and threads. and they don't scale: they throw up bottlenecks at
00:06:55.520
points of contention that can bottleneck your entire process.
00:07:03.280
this is a benchmark that i put together to show that contention on locks.
00:07:08.479
in the first block of code we have a single thread which is just incrementing a counter 10 million times,
00:07:14.240
grabbing the lock each time. in the second block of code i'm using five threads and doing the same thing,
00:07:21.440
each thread basically incrementing that counter two million times. the results are kind of interesting.
00:07:27.840
in Ruby 1.9 there's absolutely no difference in runtime; it takes four seconds in both cases,
00:07:35.360
and that's because Ruby 1.9 has the GIL: only one thread can run at a time, and so there's no lock contention.
00:07:45.520
in JRuby, because it has truly parallel threads, you get massive lock contention, so
00:07:50.960
using five threads to increment that counter causes a massive slowdown as the threads contend for that mutex.
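the benchmark on the slide isn't reproduced in the transcript, but a stdlib-only sketch of the same idea (scaled down from 10 million to 1 million increments so it runs quickly) looks like this:

```ruby
require 'benchmark'

N = 1_000_000  # the talk used 10 million; scaled down here

# single thread: grab the lock and increment, N times.
lock   = Mutex.new
count1 = 0
single = Benchmark.realtime do
  N.times { lock.synchronize { count1 += 1 } }
end

# five threads: same total work, N/5 increments each, all contending for one mutex.
count2   = 0
threaded = Benchmark.realtime do
  threads = 5.times.map do
    Thread.new { (N / 5).times { lock.synchronize { count2 += 1 } } }
  end
  threads.each(&:join)
end

puts format("single: %.2fs, five threads: %.2fs", single, threaded)
```

on MRI the two times come out close because of the GIL; on JRuby the five-thread version can be much slower, since the threads really do contend for the lock.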
00:08:03.199
so this is the most important slide in my talk
00:08:09.840
i was talking to adam heath yesterday and he said if that fourth quadrant was a movie it
00:08:15.360
would star Nicolas Cage and involve a conspiracy at the highest levels of our government.
00:08:26.560
that's the quadrant that the POSIX standard doesn't want you to know about.
00:08:32.880
and no mainstream operating system or mainstream language really integrates that
00:08:39.440
quadrant, and i think that's a shame, because threads are much more efficient than processes. copying data isn't bad,
00:08:45.680
and copying data means that you don't have to use locks. so by focusing on that quadrant, i
00:08:50.880
think we can uncover a lot of tools that really make development a lot easier and a lot more
00:08:56.080
scalable. actually, i'm not sure that's the right statement:
00:09:01.360
Windows has thread-local storage, and i'd have to go back and check the api because it's been a few years, but thread-local storage
00:09:13.440
means that you can access that storage from that thread only; i don't know that you can use
00:09:19.200
it as a communication mechanism.
00:09:33.200
yes, it's a general statement, let's just leave it at that. so, what can we do
00:09:38.480
to learn about that quadrant? go outside the mainstream. there are several languages that
00:09:44.480
have offerings in that area that are very interesting, and which can help us decide, maybe, a
00:09:51.200
better course for Ruby. so, Go is a great language from
00:09:57.200
Google, and it's got this thing called goroutines in it. goroutines are really cool; i really
00:10:02.800
like them. they're just anonymous functions. you spin them up, and you can't get a
00:10:08.079
handle to it; it's not like a thread, where you new it and you've got a handle to it in a local variable. once
00:10:13.440
you spin it off, it's gone; the only way you can communicate with it is through a channel.
00:10:18.880
so i've got a function here which shows a very, very simple example of using a goroutine to perform a calculation and to
00:10:25.440
pipe the result back through the channel. you can see on line two we're creating the channel; on line three we
00:10:31.760
use the go keyword to spin off that function, and that runs in a separate thread.
00:10:37.920
meanwhile the main thread, on line nine, blocks on that channel, waiting for a message to come back,
00:10:44.560
and that message is just going to be the calculation that's performed on line 5 with the value of x.
00:10:50.320
and then once that value is calculated and put on the channel, the main thread wakes back up and prints it out.
00:10:57.200
it's a very simple model, but i really like it. i think it maps really well
00:11:03.279
onto a network model where i can't get a handle to a database but i can open a
00:11:08.800
socket to that database. so to me this kind of feels like a socket: you're pushing messages onto your
00:11:14.800
channel like you're pushing messages onto the socket. and in Go, the runtime actually uses a pool of
00:11:21.120
threads, so goroutines can actually migrate between the threads in that pool,
00:11:27.839
and your goroutine can be executed by any thread.
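the Go code itself isn't in the transcript, but the model maps fairly directly onto Ruby's thread-safe Queue; a rough Ruby analogue (my sketch, not the slide's Go code) looks like this:

```ruby
# Queue plays the role of Go's channel: the spun-off thread is anonymous,
# and the only way to talk to it is through the queue.
x       = 20
channel = Queue.new

Thread.new { channel << x * 2 }  # like "go func() { channel <- x * 2 }()"

result = channel.pop             # the main thread blocks here until a message arrives
puts result                      # prints 40
```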
00:11:35.920
actors are in the title of the talk, so it's no surprise that i was going to talk about them.
00:11:41.760
erlang and scala use actors as
00:11:47.120
their main concurrency primitive. the idea behind an actor is that
00:11:54.240
it's something that just executes independently: it has a mailbox, and you send messages to it, like an Erlang process.
00:12:01.519
so in this case i've got a very simple example where i've got a handle to an actor that i've created, and i'm just
00:12:06.560
pushing a hash onto it as a message, and it will do something with that message.
00:12:14.639
actors can be thread-backed or fiber-backed; you don't really know, and you shouldn't care,
00:12:20.880
because those are low-level primitives; an actor is a higher-level construct.
00:12:26.160
and because you're copying data to that actor,
00:12:32.800
there's no locking going on here. the actor gets its own copy of the message, and if you mutate that message after you
00:12:39.200
send it, the actor doesn't know that, because it has its own copy of that message.
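that mailbox-plus-copy model is easy to sketch by hand with a thread and a queue. this toy actor (my illustration, far simpler than any real library) even Marshal-copies each message, so mutating the message after sending changes nothing inside the actor:

```ruby
# a toy actor: one thread draining a mailbox, one message at a time.
class ToyActor
  def initialize(&handler)
    @mailbox = Queue.new
    @thread  = Thread.new do
      while (msg = @mailbox.pop)   # a nil message shuts the actor down
        handler.call(msg)
      end
    end
  end

  # send a deep copy, so the actor owns its message outright (no locks needed).
  def <<(message)
    @mailbox << Marshal.load(Marshal.dump(message))
    self
  end

  def stop
    @mailbox << nil
    @thread.join
  end
end

seen  = Queue.new
actor = ToyActor.new { |msg| seen << msg[:value] }

payload = { value: 1 }
actor << payload
payload[:value] = 999   # mutated after send: the actor still sees 1
actor.stop
```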
00:12:47.040
so i want to take a look at some actor projects in Ruby, just to quickly sort of
00:12:52.800
cover the scene and maybe give you a couple ideas of what's out there already
00:12:58.480
probably the most mainstream actor api out there is actor.rb. it ships with Rubinius,
00:13:05.279
and it does work on JRuby and MRI; you just have to
00:13:10.320
get the gem. Rubinius actually publishes a gem with
00:13:15.680
just this code in it that you can use on MRI Ruby.
00:13:21.040
this is an example of using the api. this is,
00:13:26.800
you know, a very trivial example, so i apologize, but it's your typical ping-pong, incrementing a counter.
00:13:33.600
on lines 11 and 18 we're creating the two actors, the ping and pong actors,
00:13:39.120
and they're just going to loop, receiving a message, treating that message as an integer, and
00:13:45.120
incrementing that message back and forth between them. so on line 25 the main thread kicks off
00:13:51.839
the ping-pong process with zero, and then that number just ping-pongs
00:13:57.519
back and forth between the two actors until it reaches 100,000, at which time it just prints out
00:14:03.519
the message and exits. a very simple api in principle.
00:14:10.560
i found it very hard to use, though. it's not idiomatic Ruby at all, very procedural, and i think it's heavily
00:14:16.480
based on the Erlang and Scala apis, so it just doesn't feel like Ruby. it was hard
00:14:22.320
for me to understand and use; i actually had to have the author explain to me how to use
00:14:28.720
it. but there's a much better project: Celluloid.
00:14:35.199
Tony Arcieri is the author of Celluloid. he's, i think, well known in the Ruby
00:14:40.959
community for his Revactor and Reia projects. Revactor was his first take, his v1,
00:14:47.600
of an actor api for Ruby, and Celluloid is his v2. and
00:14:53.360
it's a really nice api. it's an object-oriented actor, so it's very idiomatic Ruby,
00:14:58.560
and it provides basically asynchronous method invocation on objects. his implementation uses a mixture of
00:15:05.040
threads and fibers under the covers. this is that same ping-pong example, but done in
00:15:12.880
Celluloid. you can see here we're actually defining two different classes, and we're
00:15:18.560
including the Celluloid module. by including the Celluloid module, that
00:15:24.160
turns instances of this class into actors.
00:15:31.040
so down on lines 24 and 25 our main thread is actually creating those two actors,
00:15:37.519
and then kicking off the process on line 27.
00:15:42.639
note on line 28 it then blocks, waiting for the actors to finish.
00:15:49.199
that's a nice feature of Celluloid, where you can actually get some
00:15:54.959
synchronization between the actors. but basically, again,
00:16:00.880
they're just passing the message back and forth until they reach 100,000, at which point the done
00:16:06.480
symbol is signaled on line 15.
00:16:11.839
so i really like Celluloid; as you can see, it's very idiomatic Ruby.
00:16:16.880
one limitation of it, which i think needs to be rectified, is the
00:16:22.880
fact that every instance that you create is backed by a real thread, which means,
00:16:28.560
in Ruby, threads are a minimum of half a megabyte of memory, so every object you create
00:16:34.839
basically takes half a megabyte of memory,
00:16:39.920
which is very heavyweight, needless to say. so for Celluloid's actors it
00:16:46.639
would be nice to have pooling, so you could reuse instances.
00:16:52.079
girl_friday is the third project that i want to talk about. this is my project. it is based on actor.rb, so it's layered
00:16:59.519
on top of actor.rb, and it provides background processing tools where you
00:17:04.880
have a supervisor and then a pool of workers behind that supervisor.
00:17:11.120
it also provides parallel batch operations, so you can give it a large amount of work and it will give
00:17:17.600
that work to the workers and then you can block waiting for the entire batch to be completed
00:17:24.079
its design is more functional than object-oriented (i'll show you what that means here in a second), and it's solely thread-based.
00:17:34.960
so this is a simple example of using girl_friday. i designed girl_friday to be just very simple background processing
00:17:41.760
for rails applications the idea being that in this example
00:17:46.799
in a rails application, quite frequently you want to send an email when the user registers or signs up for your website,
00:17:54.400
but you don't want to do that on the main thread that's processing the request from that user, right?
00:18:01.280
you don't want to block on that network i/o, so you want to do the email delivery asynchronously. that's exactly what this
00:18:06.799
is designed to do. so in this case we're creating a work queue with three workers
00:18:13.200
behind the scenes, and all those workers do is process that block; that's
00:18:18.320
effectively line six. that's what i mean when i say it's more functional than object-oriented: we're
00:18:24.160
not creating any classes here, we're just passing blocks of work around. and on line 10 you would have something
00:18:30.640
like an ActiveRecord model where, after we save this object to the database, we want to send the email.
00:18:37.440
but instead of actually doing the delivery on that thread, we create the email and push it onto the queue.
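girl_friday's actual api isn't shown in the transcript, so here's a stdlib-only toy with the same shape: a queue, a few worker threads, and one block that does the work (deliver is replaced by recording the message):

```ruby
# a toy work queue in the shape girl_friday provides: N workers pull jobs
# off a shared queue and run the same block on each one.
class ToyWorkQueue
  def initialize(size, &work)
    @jobs    = Queue.new
    @workers = size.times.map do
      Thread.new do
        while (job = @jobs.pop)   # nil is the shutdown signal
          work.call(job)
        end
      end
    end
  end

  def push(job)
    @jobs << job
  end

  def shutdown
    @workers.size.times { @jobs << nil }  # one stop signal per worker
    @workers.each(&:join)
  end
end

delivered = Queue.new
# in the rails example this block would call message.deliver; we just record it.
queue = ToyWorkQueue.new(3) { |message| delivered << message }

queue.push("welcome email for alice")
queue.push("welcome email for bob")
queue.shutdown
```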
00:18:43.440
and then once that message gets to the worker, the worker actually calls deliver
00:18:50.000
on it. so this is an example of the batch
00:18:56.720
processing that girl_friday provides. in our case we've got a couple of urls
00:19:02.559
that we want to scrape, but we don't want to do that in serial; we want to do that in parallel, because that's network i/o.
00:19:08.480
so we create a batch with three workers and pass it the urls to process.
00:19:15.280
each of those workers will execute that block for each of those urls,
00:19:21.200
and then whatever it returns, in this case the count of the spans in the document,
00:19:26.480
that will be the result for that entry. and so when we call batch.results,
00:19:32.160
that actually blocks, waiting for the workers to complete, waiting for the queue to be drained.
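the batch idea (hand out a list of inputs, block until every result is back) can be sketched as a threaded map; again this is a stdlib stand-in, not girl_friday's real batch api:

```ruby
# a toy batch: one thread per entry, results collected in input order.
# calling results blocks (via Thread#value) until all the work has finished.
class ToyBatch
  def initialize(entries, &work)
    @threads = entries.map { |entry| Thread.new { work.call(entry) } }
  end

  def results
    @threads.map(&:value)   # Thread#value joins the thread, then returns its result
  end
end

# stand-in for scraping urls and counting spans in each document.
batch = ToyBatch.new([2, 3, 4]) { |n| n * n }
p batch.results   # prints [4, 9, 16]
```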
00:19:41.679
so that's a brief overview of actors in Ruby.
00:19:47.520
there are some times when we do need to share data, and there are some solutions there that
00:19:52.640
are a little off the beaten track and aren't really mainstream yet that
00:19:57.919
would be nice to talk about. as i said earlier, this is what the POSIX standard sort of
00:20:03.360
defined in the '80s. life has moved on; in fact we have
00:20:09.679
two other possibilities that i'm aware of, and maybe you know more.
00:20:15.520
we covered actors in that lower quadrant, but if we want to share data between threads,
00:20:21.039
we have software transactional memory. software transactional memory
00:20:28.400
basically allows you to do in memory what databases do on disk: it allows you to mutate your data, but
00:20:34.880
within the realm of a transaction, so that you can provide the ACI guarantees of ACID: atomicity,
00:20:41.840
consistency, and isolation. Clojure is
00:20:47.520
gaining a lot of traction, i think, because it uses software transactional memory for all of its mutations,
00:20:54.960
and so any programs you write in Clojure are essentially guaranteed to be highly concurrent and scalable.
00:21:03.600
note, it doesn't provide durability like a database, because we're not persisting to disk.
00:21:09.840
and i heard yesterday that Charlie Nutter announced Cloby,
00:21:15.120
where he brought the Clojure STM library to JRuby
00:21:21.919
in his Cloby gem. that sounds pretty awesome; i haven't really looked at it in depth, but
00:21:28.000
that's definitely one thing i would recommend. now, atomic instructions are the other thing
00:21:34.400
that i wanted to mention; they've been around for almost 20 years now. there are two
00:21:40.159
atomic instructions that i'm aware of that are very important. there's exchange, which allows you to
00:21:45.600
swap registers atomically, and then there's swapping a register value with a memory location atomically.
00:21:53.120
but more important, i think, is the compare-and-exchange or compare-and-set operation.
00:21:58.640
compare-and-set is the fundamental operation necessary to do optimistic locking. so once we have this compare-and-set
00:22:05.679
operation, we can write higher-level data structures that use compare-and-set
00:22:13.039
to provide essentially software transactional memory
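MRI doesn't expose real atomics, but the compare-and-set loop at the heart of optimistic locking is easy to illustrate. in this sketch of mine the CAS primitive itself is emulated with a Mutex (on the JVM it would be a single CPU instruction), and the increment simply retries until its CAS wins:

```ruby
# an atomic-style counter built on compare-and-set.
class CasCounter
  def initialize
    @value = 0
    @lock  = Mutex.new   # stands in for the hardware CAS instruction
  end

  attr_reader :value

  # install new_value only if the counter still holds expected; report success.
  def compare_and_set(expected, new_value)
    @lock.synchronize do
      return false unless @value == expected
      @value = new_value
      true
    end
  end

  # optimistic increment: read, compute, retry until nobody raced us.
  def increment
    loop do
      current = @value
      return current + 1 if compare_and_set(current, current + 1)
    end
  end
end

counter = CasCounter.new
threads = 5.times.map { Thread.new { 20_000.times { counter.increment } } }
threads.each(&:join)
puts counter.value   # prints 100000
```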
00:22:18.720
unfortunately, i'm not aware of any code that provides atomics on MRI,
00:22:25.120
but JRuby has access to atomics, because the java.util.concurrent package has
00:22:30.960
atomics in it. so i rewrote my 10-million counter benchmark with atomics,
00:22:37.760
the first block of code of course doing it with a single thread and the bottom block doing five threads,
00:22:44.400
and the results are interesting. with one thread i got two seconds;
00:22:51.840
with five threads i got one second. so why didn't it speed up five times,
00:22:57.600
only twice? because i've only got two cores, so i can only do two things at once.
00:23:03.600
but that's awesome: getting a 2x speed-up with two cores, that's a linear speed-up, and
00:23:08.799
that's awesome scalability. what's up, nick? [audience] can you please buy bigger machines for your demos in the future?
00:23:17.039
if Engine Yard wants to, i'm happy to take any sort of big multi-core
00:23:23.280
Mac Pro for the engineering program.
00:23:29.120
but that's awesome, and that shows the power of atomics right there: there's no contention at all.
00:23:34.720
it's extremely low-level, built into the Intel chipset, and it scales really well, at least for two cores. now,
00:23:40.720
to be fair, at this point, if i had 32 cores i don't know that i'd get a 32-times speed-up;
00:23:46.559
usually it tails off, but for two cores that's awesome.
00:23:53.840
Java, like i said, based on that compare-and-set operation,
00:24:00.000
has two data structures that are lockless: a hash map and a queue.
00:24:07.919
and these are great data structures to check out if you want
00:24:12.960
to see how this stuff is done under the covers, how to provide sort of a higher-level data structure built on these very
00:24:19.840
low-level primitives.
00:24:25.200
so, in conclusion: concurrency, it's always hard.
00:24:30.559
it doesn't matter what you're using; i don't care if you're using processes, events, threads, fibers,
00:24:36.799
it's always going to be hard. you're always going to have race conditions; communicating and coordinating
00:24:43.360
is always going to be difficult. but modern mainstream languages
00:24:49.120
should provide tools and apis that allow us to scale well and allow us to have fun when we're
00:24:56.640
writing these systems. we don't want to lose that sense of fun in Ruby.
00:25:03.120
so to that end, what do we need to do? i wish we had a standard actor api.
00:25:08.880
if actor.rb is not a good api, we need something different. Celluloid's an awesome api,
00:25:16.000
but i'd love to see all the implementations come up with sort of a standard api that they all agree on. i'd love to see
00:25:22.880
concurrent data structures; i'd love to see someone port those Java structures to Ruby.
00:25:30.080
i'd love to see a software transactional memory implementation for Ruby. Java, again, has several
00:25:36.880
STM implementations: like i said, there's Cloby, which i just heard about yesterday, and there's another one called
00:25:42.400
Multiverse. these are Java apis that are just waiting for somebody who
00:25:48.320
wants to build a cool new gem to wrap the Java api with a Ruby api.
00:25:58.159
if you want to do more research here, there's a couple of really interesting projects.
00:26:03.919
Kilim is a Persian word for tapestry, i think, the name basically being a play on fiber. it is
00:26:10.400
high-performance, lightweight actors for Java, and they can
00:26:16.799
spin up hundreds of thousands of actors in a process. i'm not sure how they do it.
00:26:23.679
Disruptor is a really interesting project, where this
00:26:29.039
team was building a trading exchange program.
00:26:34.960
they were getting about 50,000 operations per second,
00:26:40.720
and they decided that that wasn't nearly good enough, and so they dropped their
00:26:47.039
main design, which used queues and locks, for another data structure that was lockless,
00:26:52.880
and they got a 100x improvement in speed. they can do six million
00:26:59.120
operations per second now on the JVM. it's just an incredible story.
00:27:04.720
and then of course actors in Scala and Erlang are the old standbys, if you want to learn more about
00:27:10.960
actors in general. that's all i have for you.
00:27:16.240
thank you for listening i'm happy to take questions
00:27:46.080
so the question was: the girl_friday library looks very similar to the Dispatch library in MacRuby. yeah,
00:27:52.320
no, i've never heard of it, actually; i'll take a look at it. do you mean Grand Central? Grand Central Dispatch? yeah, it's based on that.
00:27:58.880
i mean, i've heard of Grand Central only because it kind of made waves back
00:28:06.000
when Snow Leopard was introduced, i think, but i've never looked at it
00:28:11.200
in depth. but i'll take a look at it, yeah.
00:28:16.399
so, do you have any books that you can recommend as sort of a practical guide on doing concurrent processing? you know,
00:28:23.360
i did some back when i was doing my cs degree, but that was all POSIX,
00:28:28.799
and most of the books i've found are very academic, very high-level overviews. i was kind of hoping for some sort of
00:28:34.640
overview of libraries. nothing springs to mind. does anybody
00:28:40.559
have any books they recommend for pragmatic concurrency? [audience] there's Java Concurrency in Practice. yes,
00:28:48.080
and it covers, you know, it's part theory,
00:28:54.240
but it's also a practical look at what Java can provide.
00:29:07.200
so you covered the technologies that are available to us as Rubyists for concurrency. can you
00:29:14.799
speak a little bit about how, once as Rubyists we have access to
00:29:21.360
STM or concurrent data structures, how should
00:29:26.480
having access to those things inform the way that we design our programs should
00:29:32.080
we just use those data structures in the same way that we would use data structures now, or do we need to reformulate the way
00:29:39.200
that we write programs to be more cognizant of how what we do affects the
00:29:44.320
ability of our programs? i think it's the latter. what's really interesting about
00:29:49.679
especially a language like Clojure is the way that Clojure is designed: because
00:29:55.760
it's immutable, you have to plan for every mutation that you want to do, and
00:30:01.520
when you mutate data in Clojure you don't just mutate it; you have to provide Clojure
00:30:09.039
a proc, essentially, which knows how to do the mutation that you want to do, and then
00:30:14.080
Clojure will do it for you. because it's doing it within a transaction, if that transaction rolls
00:30:19.200
back, it has to run it again, and it'll keep trying to apply that mutation until the transaction succeeds.
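that "hand the runtime a function and let it retry" style can be mimicked in Ruby with a Clojure-atom-like object; this is my sketch of the idea, just compare-and-set wrapped so the caller supplies the mutation as a block:

```ruby
# a Clojure-atom-style cell: you never assign to it directly, you hand it a
# block that computes the new value, and swap retries the block on conflict.
class ToyAtom
  def initialize(value)
    @value = value
    @lock  = Mutex.new   # emulates a hardware compare-and-set
  end

  attr_reader :value

  def compare_and_set(expected, new_value)
    @lock.synchronize do
      return false unless @value == expected
      @value = new_value
      true
    end
  end

  # re-run the block until its result is installed without a conflicting write.
  def swap
    loop do
      old     = @value
      new_val = yield(old)
      return new_val if compare_and_set(old, new_val)
    end
  end
end

account = ToyAtom.new(100)
account.swap { |balance| balance - 30 }
puts account.value   # prints 70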
00:30:25.679
so effectively, when you have something like STM, your code has to plan to mutate.
00:30:33.279
it's not just a variable that you can willy-nilly increment
00:30:38.880
and have it work most of the time, but that one percent of the time when you get a race condition, all of a sudden things blow up. so STM forces better
00:30:46.880
practices around mutation of data. [audience] you mentioned,
00:30:53.279
in the process category, things like named pipes and
00:30:58.399
just the pipe system call. yeah. what experience or comments would you have about those?
00:31:09.360
to be honest, i've never done any pipes work at all, so i have no experience. does anybody
00:31:15.919
have any comments about using pipes with threads? does anybody know?
00:31:22.880
[audience] it works, just do it. yeah. [audience] just use a Queue, though.
00:31:28.240
yeah, you could just use a Queue.
00:31:33.519
so yesterday at the Ruby 1.9.3 talk, they were talking about how a lot of the
00:31:40.720
current experimentation is going on to see how they can improve multi-process
00:31:46.039
performance, looking for more efficient ways to communicate between processes rather than looking for more ways to do
00:31:52.159
communication. i think it's exactly the wrong direction.
00:31:58.960
i think having an MVM design where you have a different heap for each
00:32:04.399
VM is exactly the wrong direction; it's all about memory efficiency.
00:32:09.840
you know, i put up a blog post on the Carbon Five blog
00:32:15.200
a week or two ago, basically, where i improved Resque's memory efficiency by 68
00:32:20.559
times. i had a client that had a large farm of Resque workers,
00:32:26.720
and it was using in total 68 gigabytes of memory. i rewrote Resque to use threads in
00:32:33.279
JRuby instead of forked child processes, and we can now do that same amount of
00:32:40.159
work with one JVM process using one gigabyte.
00:32:45.519
but that's very specific to JRuby; in MRI you don't have this problem,
00:32:51.679
you can use threads, right? but you don't have to. is forking actually efficient in that case?
00:32:58.000
it's not. it's not copy-on-write friendly.
00:33:03.279
forking a process when your GC isn't copy-on-write friendly,
00:33:08.960
you're asking for trouble. yeah, MRI really just does not have a good
00:33:14.559
concurrency story. forking processes does not work unless your garbage collector is
00:33:20.000
copy-on-write friendly. [audience] how does the JVM differ from, like, Erlang, with
00:33:26.559
processes, multiple processes in one process? so the JVM process is much heavier
00:33:33.679
weight; it's an entire runtime system, effectively. Erlang's processes are very, very
00:33:40.640
lightweight structures; i believe they're essentially fibers, very lightweight fibers.
00:33:48.640
so yeah, they don't really compare.
00:33:54.480
i've always felt the problem with threads was the problem of accidental shared
00:34:00.559
mutable memory. so do you have any advice there?
00:34:06.960
there's not much you can do, given that Ruby is a mutable system, you know:
00:34:13.440
best practices, trying to use STM or other types of systems for all the stuff that
00:34:20.240
you do, or, you know, trying to use actor libraries and that sort of thing,
00:34:26.159
so that you yourself are not writing code that mutates data; you are instead
00:34:31.200
writing actors. [audience] how do you deal with the fact that
00:34:38.560
shared mutable memory could happen in a library that you're using? yeah,
00:34:43.760
there's not, i mean, it's the same thing with monkey patching, right? how do you know a library's not monkey patching Object to raise
00:34:50.159
every time a method is called? who knows. [audience] well, monkey patching is a little more deterministic. the shared mutable
00:34:55.919
memory problem is that it might be happening when you don't know it, behind these
00:35:01.119
processes, at the same time. that's true, that's true. i don't think there's a good answer for your
00:35:06.320
question. i think that's part of the reason the MVM work looks
00:35:11.599
attractive: you avoid mutable shared state.
00:35:17.680
it's a tough thing to do with Ruby, obviously. you're paying the price for that, though, yeah, with the, you know,
00:35:23.280
non-shared heaps. and that's really what it comes down to: making trade-offs.
00:35:29.520
if you can afford the memory, by all means use Resque in forking mode, but
00:35:34.880
you know, if you're paying for hundreds of machines on ec2 and you want to go down to 10 machines or one machine,
00:35:41.440
sometimes you've got to do optimizations that cut corners, and you just hope that your testing
00:35:47.359
uncovers any issues beforehand.
00:35:54.240
hi. so you mentioned that forking is not a good strategy unless you have a
00:35:59.280
garbage collector that is copy-on-write friendly, right. i wonder, in your past project, if you were using REE, and if it's
00:36:07.359
not copy-on-write friendly for some reason, because i remember in the presentation, or the blog post, they
00:36:13.119
mentioned that they need to use REE specifically just because of this.
00:36:18.320
yeah, well, REE, those guys have done a lot of work to make their garbage collector copy-on-write friendly,
00:36:25.200
and you definitely do see some savings there. obviously REE's problem is they're still
00:36:30.960
stuck on Ruby 1.8, and the Ruby world has kind of moved on from there. i'm using Ruby 1.9
00:36:36.320
everywhere; i haven't used Ruby 1.8 in a while. so,
00:36:43.680
yeah, when i have problems i tend to move to JRuby;
00:36:50.160
it's just almost better in every way.
00:37:05.440
any others? all right, thank you.